2026-03-18 01:17:20.456116 | Job console starting
2026-03-18 01:17:20.476132 | Updating git repos
2026-03-18 01:17:20.577963 | Cloning repos into workspace
2026-03-18 01:17:20.834214 | Restoring repo states
2026-03-18 01:17:20.856440 | Merging changes
2026-03-18 01:17:20.856473 | Checking out repos
2026-03-18 01:17:21.169372 | Preparing playbooks
2026-03-18 01:17:21.912776 | Running Ansible setup
2026-03-18 01:17:26.471461 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-18 01:17:27.249042 |
2026-03-18 01:17:27.249226 | PLAY [Base pre]
2026-03-18 01:17:27.266866 |
2026-03-18 01:17:27.267051 | TASK [Setup log path fact]
2026-03-18 01:17:27.287871 | orchestrator | ok
2026-03-18 01:17:27.305809 |
2026-03-18 01:17:27.305985 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-18 01:17:27.346966 | orchestrator | ok
2026-03-18 01:17:27.358932 |
2026-03-18 01:17:27.359098 | TASK [emit-job-header : Print job information]
2026-03-18 01:17:27.398928 | # Job Information
2026-03-18 01:17:27.399129 | Ansible Version: 2.16.14
2026-03-18 01:17:27.399166 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-03-18 01:17:27.399201 | Pipeline: periodic-midnight
2026-03-18 01:17:27.399225 | Executor: 521e9411259a
2026-03-18 01:17:27.399246 | Triggered by: https://github.com/osism/testbed
2026-03-18 01:17:27.399267 | Event ID: 3f8ad891ac4b40f0a95283030238698e
2026-03-18 01:17:27.406072 |
2026-03-18 01:17:27.406193 | LOOP [emit-job-header : Print node information]
2026-03-18 01:17:27.545902 | orchestrator | ok:
2026-03-18 01:17:27.546262 | orchestrator | # Node Information
2026-03-18 01:17:27.546333 | orchestrator | Inventory Hostname: orchestrator
2026-03-18 01:17:27.546376 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-18 01:17:27.546413 | orchestrator | Username: zuul-testbed05
2026-03-18 01:17:27.546450 | orchestrator | Distro: Debian 12.13
2026-03-18 01:17:27.546498 | orchestrator | Provider: static-testbed
2026-03-18 01:17:27.546543 | orchestrator | Region:
2026-03-18 01:17:27.546587 | orchestrator | Label: testbed-orchestrator
2026-03-18 01:17:27.546625 | orchestrator | Product Name: OpenStack Nova
2026-03-18 01:17:27.546661 | orchestrator | Interface IP: 81.163.193.140
2026-03-18 01:17:27.571553 |
2026-03-18 01:17:27.571718 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-18 01:17:28.066230 | orchestrator -> localhost | changed
2026-03-18 01:17:28.074940 |
2026-03-18 01:17:28.075111 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-18 01:17:29.144742 | orchestrator -> localhost | changed
2026-03-18 01:17:29.171367 |
2026-03-18 01:17:29.171524 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-18 01:17:29.470719 | orchestrator -> localhost | ok
2026-03-18 01:17:29.484395 |
2026-03-18 01:17:29.484597 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-18 01:17:29.522285 | orchestrator | ok
2026-03-18 01:17:29.542681 | orchestrator | included: /var/lib/zuul/builds/46f873c7bae843c8b90b2a18e80e4656/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-18 01:17:29.551152 |
2026-03-18 01:17:29.551270 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-18 01:17:30.802456 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-18 01:17:30.802715 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/46f873c7bae843c8b90b2a18e80e4656/work/46f873c7bae843c8b90b2a18e80e4656_id_rsa
2026-03-18 01:17:30.802757 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/46f873c7bae843c8b90b2a18e80e4656/work/46f873c7bae843c8b90b2a18e80e4656_id_rsa.pub
2026-03-18 01:17:30.802784 | orchestrator -> localhost | The key fingerprint is:
2026-03-18 01:17:30.802809 | orchestrator -> localhost | SHA256:5RtNeMMp5Ngy3XnwWSTx+wN1Z+cnaHM14IWZ7JTlfsM zuul-build-sshkey
2026-03-18 01:17:30.802936 | orchestrator -> localhost | The key's randomart image is:
2026-03-18 01:17:30.802980 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-18 01:17:30.803003 | orchestrator -> localhost | | . o.O=o|
2026-03-18 01:17:30.803064 | orchestrator -> localhost | | * +.%+= |
2026-03-18 01:17:30.803086 | orchestrator -> localhost | | + B @.++B|
2026-03-18 01:17:30.803107 | orchestrator -> localhost | | = = =++*|
2026-03-18 01:17:30.803128 | orchestrator -> localhost | | S o =.oE+|
2026-03-18 01:17:30.803153 | orchestrator -> localhost | | + o..=|
2026-03-18 01:17:30.803174 | orchestrator -> localhost | | . ..|
2026-03-18 01:17:30.803194 | orchestrator -> localhost | | .|
2026-03-18 01:17:30.803214 | orchestrator -> localhost | | |
2026-03-18 01:17:30.803235 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-18 01:17:30.803290 | orchestrator -> localhost | ok: Runtime: 0:00:00.697191
2026-03-18 01:17:30.811174 |
2026-03-18 01:17:30.811305 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-18 01:17:30.840389 | orchestrator | ok
2026-03-18 01:17:30.850401 | orchestrator | included: /var/lib/zuul/builds/46f873c7bae843c8b90b2a18e80e4656/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-18 01:17:30.859552 |
2026-03-18 01:17:30.859647 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-18 01:17:30.883122 | orchestrator | skipping: Conditional result was False
2026-03-18 01:17:30.891190 |
2026-03-18 01:17:30.891308 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-18 01:17:31.512691 | orchestrator | changed
2026-03-18 01:17:31.520973 |
2026-03-18 01:17:31.521154 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-18 01:17:31.812631 | orchestrator | ok
2026-03-18 01:17:31.822297 |
2026-03-18 01:17:31.822454 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-18 01:17:32.277089 | orchestrator | ok
2026-03-18 01:17:32.287683 |
2026-03-18 01:17:32.287946 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-18 01:17:32.764350 | orchestrator | ok
2026-03-18 01:17:32.773628 |
2026-03-18 01:17:32.773766 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-18 01:17:32.800223 | orchestrator | skipping: Conditional result was False
2026-03-18 01:17:32.812242 |
2026-03-18 01:17:32.812382 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-18 01:17:33.282724 | orchestrator -> localhost | changed
2026-03-18 01:17:33.311716 |
2026-03-18 01:17:33.311939 | TASK [add-build-sshkey : Add back temp key]
2026-03-18 01:17:33.657118 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/46f873c7bae843c8b90b2a18e80e4656/work/46f873c7bae843c8b90b2a18e80e4656_id_rsa (zuul-build-sshkey)
2026-03-18 01:17:33.657751 | orchestrator -> localhost | ok: Runtime: 0:00:00.019387
2026-03-18 01:17:33.674954 |
2026-03-18 01:17:33.675207 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-18 01:17:34.106978 | orchestrator | ok
2026-03-18 01:17:34.116598 |
2026-03-18 01:17:34.116756 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-18 01:17:34.151587 | orchestrator | skipping: Conditional result was False
2026-03-18 01:17:34.209634 |
2026-03-18 01:17:34.209777 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-18 01:17:34.643756 | orchestrator | ok
2026-03-18 01:17:34.659360 |
2026-03-18 01:17:34.659490 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-18 01:17:34.704427 | orchestrator | ok
2026-03-18 01:17:34.714736 |
2026-03-18 01:17:34.714894 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-18 01:17:35.043475 | orchestrator -> localhost | ok
2026-03-18 01:17:35.056171 |
2026-03-18 01:17:35.056333 | TASK [validate-host : Collect information about the host]
2026-03-18 01:17:36.336179 | orchestrator | ok
2026-03-18 01:17:36.353437 |
2026-03-18 01:17:36.353564 | TASK [validate-host : Sanitize hostname]
2026-03-18 01:17:36.421304 | orchestrator | ok
2026-03-18 01:17:36.430150 |
2026-03-18 01:17:36.430313 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-18 01:17:37.025363 | orchestrator -> localhost | changed
2026-03-18 01:17:37.036633 |
2026-03-18 01:17:37.036786 | TASK [validate-host : Collect information about zuul worker]
2026-03-18 01:17:37.576501 | orchestrator | ok
2026-03-18 01:17:37.582310 |
2026-03-18 01:17:37.582426 | TASK [validate-host : Write out all zuul information for each host]
2026-03-18 01:17:38.176400 | orchestrator -> localhost | changed
2026-03-18 01:17:38.198552 |
2026-03-18 01:17:38.198717 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-18 01:17:38.519930 | orchestrator | ok
2026-03-18 01:17:38.529697 |
2026-03-18 01:17:38.529832 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-18 01:18:00.293604 | orchestrator | changed:
2026-03-18 01:18:00.293901 | orchestrator | .d..t...... src/
2026-03-18 01:18:00.293939 | orchestrator | .d..t...... src/github.com/
2026-03-18 01:18:00.293965 | orchestrator | .d..t...... src/github.com/osism/
2026-03-18 01:18:00.293988 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-18 01:18:00.294053 | orchestrator | RedHat.yml
2026-03-18 01:18:00.308884 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-18 01:18:00.308901 | orchestrator | RedHat.yml
2026-03-18 01:18:00.308954 | orchestrator | = 2.2.0"...
2026-03-18 01:18:10.452058 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-18 01:18:10.467323 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-03-18 01:18:10.613863 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-18 01:18:11.445239 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-18 01:18:11.512408 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-18 01:18:12.033485 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-18 01:18:12.124698 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-18 01:18:12.920735 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-18 01:18:12.920825 | orchestrator |
2026-03-18 01:18:12.920839 | orchestrator | Providers are signed by their developers.
2026-03-18 01:18:12.920849 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-18 01:18:12.920861 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-18 01:18:12.920884 | orchestrator |
2026-03-18 01:18:12.920893 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-18 01:18:12.920919 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-18 01:18:12.920929 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-18 01:18:12.920937 | orchestrator | you run "tofu init" in the future.
2026-03-18 01:18:12.921059 | orchestrator |
2026-03-18 01:18:12.921076 | orchestrator | OpenTofu has been successfully initialized!
2026-03-18 01:18:12.921085 | orchestrator |
2026-03-18 01:18:12.921100 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-18 01:18:12.921108 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-18 01:18:12.921122 | orchestrator | should now work.
2026-03-18 01:18:12.921134 | orchestrator |
2026-03-18 01:18:12.921143 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-18 01:18:12.921151 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-18 01:18:12.921160 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-18 01:18:13.138291 | orchestrator | Created and switched to workspace "ci"!
2026-03-18 01:18:13.138415 | orchestrator |
2026-03-18 01:18:13.138424 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-18 01:18:13.138429 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-18 01:18:13.138435 | orchestrator | for this configuration.
2026-03-18 01:18:13.297410 | orchestrator | ci.auto.tfvars
2026-03-18 01:18:13.299921 | orchestrator | default_custom.tf
2026-03-18 01:18:14.344199 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-18 01:18:14.936769 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-18 01:18:15.162284 | orchestrator |
2026-03-18 01:18:15.162988 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-18 01:18:15.163137 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-18 01:18:15.163565 | orchestrator |   + create
2026-03-18 01:18:15.164003 | orchestrator |  <= read (data resources)
2026-03-18 01:18:15.164143 | orchestrator |
2026-03-18 01:18:15.164255 | orchestrator | OpenTofu will perform the following actions:
2026-03-18 01:18:15.165989 | orchestrator |
2026-03-18 01:18:15.166047 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-03-18 01:18:15.166054 | orchestrator |   # (config refers to values not yet known)
2026-03-18 01:18:15.166088 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-03-18 01:18:15.166143 | orchestrator |       + checksum = (known after apply)
2026-03-18 01:18:15.166273 | orchestrator |       + created_at = (known after apply)
2026-03-18 01:18:15.166356 | orchestrator |       + file = (known after apply)
2026-03-18 01:18:15.166425 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.166446 | orchestrator |       + metadata = (known after apply)
2026-03-18 01:18:15.166450 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-18 01:18:15.166454 | orchestrator |       + min_ram_mb = (known after apply)
2026-03-18 01:18:15.166459 | orchestrator |       + most_recent = true
2026-03-18 01:18:15.166463 | orchestrator |       + name = (known after apply)
2026-03-18 01:18:15.166467 | orchestrator |       + protected = (known after apply)
2026-03-18 01:18:15.166471 | orchestrator |       + region = (known after apply)
2026-03-18 01:18:15.166477 | orchestrator |       + schema = (known after apply)
2026-03-18 01:18:15.166481 | orchestrator |       + size_bytes = (known after apply)
2026-03-18 01:18:15.166485 | orchestrator |       + tags = (known after apply)
2026-03-18 01:18:15.166489 | orchestrator |       + updated_at = (known after apply)
2026-03-18 01:18:15.166493 | orchestrator |     }
2026-03-18 01:18:15.166735 | orchestrator |
2026-03-18 01:18:15.166881 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-03-18 01:18:15.166944 | orchestrator |   # (config refers to values not yet known)
2026-03-18 01:18:15.166948 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-03-18 01:18:15.166952 | orchestrator |       + checksum = (known after apply)
2026-03-18 01:18:15.166956 | orchestrator |       + created_at = (known after apply)
2026-03-18 01:18:15.166960 | orchestrator |       + file = (known after apply)
2026-03-18 01:18:15.166964 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.166968 | orchestrator |       + metadata = (known after apply)
2026-03-18 01:18:15.166972 | orchestrator |       + min_disk_gb = (known after apply)
2026-03-18 01:18:15.166976 | orchestrator |       + min_ram_mb = (known after apply)
2026-03-18 01:18:15.166980 | orchestrator |       + most_recent = true
2026-03-18 01:18:15.166984 | orchestrator |       + name = (known after apply)
2026-03-18 01:18:15.166988 | orchestrator |       + protected = (known after apply)
2026-03-18 01:18:15.166992 | orchestrator |       + region = (known after apply)
2026-03-18 01:18:15.166996 | orchestrator |       + schema = (known after apply)
2026-03-18 01:18:15.166999 | orchestrator |       + size_bytes = (known after apply)
2026-03-18 01:18:15.167004 | orchestrator |       + tags = (known after apply)
2026-03-18 01:18:15.167007 | orchestrator |       + updated_at = (known after apply)
2026-03-18 01:18:15.167011 | orchestrator |     }
2026-03-18 01:18:15.167307 | orchestrator |
2026-03-18 01:18:15.167367 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-03-18 01:18:15.167372 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-03-18 01:18:15.167376 | orchestrator |       + content = (known after apply)
2026-03-18 01:18:15.167380 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-18 01:18:15.167384 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-18 01:18:15.167388 | orchestrator |       + content_md5 = (known after apply)
2026-03-18 01:18:15.167392 | orchestrator |       + content_sha1 = (known after apply)
2026-03-18 01:18:15.167396 | orchestrator |       + content_sha256 = (known after apply)
2026-03-18 01:18:15.167399 | orchestrator |       + content_sha512 = (known after apply)
2026-03-18 01:18:15.167403 | orchestrator |       + directory_permission = "0777"
2026-03-18 01:18:15.167407 | orchestrator |       + file_permission = "0644"
2026-03-18 01:18:15.167411 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-03-18 01:18:15.167415 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.167419 | orchestrator |     }
2026-03-18 01:18:15.167774 | orchestrator |
2026-03-18 01:18:15.167804 | orchestrator |   # local_file.id_rsa_pub will be created
2026-03-18 01:18:15.167809 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-03-18 01:18:15.167813 | orchestrator |       + content = (known after apply)
2026-03-18 01:18:15.167817 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-18 01:18:15.167820 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-18 01:18:15.167824 | orchestrator |       + content_md5 = (known after apply)
2026-03-18 01:18:15.167828 | orchestrator |       + content_sha1 = (known after apply)
2026-03-18 01:18:15.167832 | orchestrator |       + content_sha256 = (known after apply)
2026-03-18 01:18:15.167843 | orchestrator |       + content_sha512 = (known after apply)
2026-03-18 01:18:15.167847 | orchestrator |       + directory_permission = "0777"
2026-03-18 01:18:15.167851 | orchestrator |       + file_permission = "0644"
2026-03-18 01:18:15.167862 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-03-18 01:18:15.167866 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.167870 | orchestrator |     }
2026-03-18 01:18:15.168202 | orchestrator |
2026-03-18 01:18:15.168210 | orchestrator |   # local_file.inventory will be created
2026-03-18 01:18:15.168214 | orchestrator |   + resource "local_file" "inventory" {
2026-03-18 01:18:15.168218 | orchestrator |       + content = (known after apply)
2026-03-18 01:18:15.168222 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-18 01:18:15.168226 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-18 01:18:15.168230 | orchestrator |       + content_md5 = (known after apply)
2026-03-18 01:18:15.168234 | orchestrator |       + content_sha1 = (known after apply)
2026-03-18 01:18:15.168238 | orchestrator |       + content_sha256 = (known after apply)
2026-03-18 01:18:15.168242 | orchestrator |       + content_sha512 = (known after apply)
2026-03-18 01:18:15.168246 | orchestrator |       + directory_permission = "0777"
2026-03-18 01:18:15.168250 | orchestrator |       + file_permission = "0644"
2026-03-18 01:18:15.168254 | orchestrator |       + filename = "inventory.ci"
2026-03-18 01:18:15.168266 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.168270 | orchestrator |     }
2026-03-18 01:18:15.168276 | orchestrator |
2026-03-18 01:18:15.168281 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-03-18 01:18:15.168284 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-03-18 01:18:15.168288 | orchestrator |       + content = (sensitive value)
2026-03-18 01:18:15.168292 | orchestrator |       + content_base64sha256 = (known after apply)
2026-03-18 01:18:15.168296 | orchestrator |       + content_base64sha512 = (known after apply)
2026-03-18 01:18:15.168300 | orchestrator |       + content_md5 = (known after apply)
2026-03-18 01:18:15.168304 | orchestrator |       + content_sha1 = (known after apply)
2026-03-18 01:18:15.168307 | orchestrator |       + content_sha256 = (known after apply)
2026-03-18 01:18:15.168311 | orchestrator |       + content_sha512 = (known after apply)
2026-03-18 01:18:15.168315 | orchestrator |       + directory_permission = "0700"
2026-03-18 01:18:15.168319 | orchestrator |       + file_permission = "0600"
2026-03-18 01:18:15.168323 | orchestrator |       + filename = ".id_rsa.ci"
2026-03-18 01:18:15.168327 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.168331 | orchestrator |     }
2026-03-18 01:18:15.168335 | orchestrator |
2026-03-18 01:18:15.168340 | orchestrator |   # null_resource.node_semaphore will be created
2026-03-18 01:18:15.168344 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-03-18 01:18:15.168348 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.168352 | orchestrator |     }
2026-03-18 01:18:15.168358 | orchestrator |
2026-03-18 01:18:15.168362 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-18 01:18:15.168365 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-18 01:18:15.168369 | orchestrator |       + attachment = (known after apply)
2026-03-18 01:18:15.168373 | orchestrator |       + availability_zone = "nova"
2026-03-18 01:18:15.168377 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.168381 | orchestrator |       + image_id = (known after apply)
2026-03-18 01:18:15.168385 | orchestrator |       + metadata = (known after apply)
2026-03-18 01:18:15.168389 | orchestrator |       + name = "testbed-volume-manager-base"
2026-03-18 01:18:15.168392 | orchestrator |       + region = (known after apply)
2026-03-18 01:18:15.168396 | orchestrator |       + size = 80
2026-03-18 01:18:15.168400 | orchestrator |       + volume_retype_policy = "never"
2026-03-18 01:18:15.168404 | orchestrator |       + volume_type = "ssd"
2026-03-18 01:18:15.168408 | orchestrator |     }
2026-03-18 01:18:15.168507 | orchestrator |
2026-03-18 01:18:15.168709 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-18 01:18:15.168715 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-18 01:18:15.168719 | orchestrator |       + attachment = (known after apply)
2026-03-18 01:18:15.168723 | orchestrator |       + availability_zone = "nova"
2026-03-18 01:18:15.168727 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.168736 | orchestrator |       + image_id = (known after apply)
2026-03-18 01:18:15.168740 | orchestrator |       + metadata = (known after apply)
2026-03-18 01:18:15.168744 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-03-18 01:18:15.168748 | orchestrator |       + region = (known after apply)
2026-03-18 01:18:15.168752 | orchestrator |       + size = 80
2026-03-18 01:18:15.168756 | orchestrator |       + volume_retype_policy = "never"
2026-03-18 01:18:15.168760 | orchestrator |       + volume_type = "ssd"
2026-03-18 01:18:15.168764 | orchestrator |     }
2026-03-18 01:18:15.168779 | orchestrator |
2026-03-18 01:18:15.168783 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-18 01:18:15.168787 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-18 01:18:15.168791 | orchestrator |       + attachment = (known after apply)
2026-03-18 01:18:15.168794 | orchestrator |       + availability_zone = "nova"
2026-03-18 01:18:15.168798 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.168802 | orchestrator |       + image_id = (known after apply)
2026-03-18 01:18:15.168806 | orchestrator |       + metadata = (known after apply)
2026-03-18 01:18:15.168810 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-03-18 01:18:15.168814 | orchestrator |       + region = (known after apply)
2026-03-18 01:18:15.168818 | orchestrator |       + size = 80
2026-03-18 01:18:15.168821 | orchestrator |       + volume_retype_policy = "never"
2026-03-18 01:18:15.168825 | orchestrator |       + volume_type = "ssd"
2026-03-18 01:18:15.168829 | orchestrator |     }
2026-03-18 01:18:15.168833 | orchestrator |
2026-03-18 01:18:15.168837 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-18 01:18:15.168841 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-18 01:18:15.168845 | orchestrator |       + attachment = (known after apply)
2026-03-18 01:18:15.168848 | orchestrator |       + availability_zone = "nova"
2026-03-18 01:18:15.168852 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.168856 | orchestrator |       + image_id = (known after apply)
2026-03-18 01:18:15.168860 | orchestrator |       + metadata = (known after apply)
2026-03-18 01:18:15.168864 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-03-18 01:18:15.168868 | orchestrator |       + region = (known after apply)
2026-03-18 01:18:15.168871 | orchestrator |       + size = 80
2026-03-18 01:18:15.168879 | orchestrator |       + volume_retype_policy = "never"
2026-03-18 01:18:15.168883 | orchestrator |       + volume_type = "ssd"
2026-03-18 01:18:15.168887 | orchestrator |     }
2026-03-18 01:18:15.168891 | orchestrator |
2026-03-18 01:18:15.168895 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-18 01:18:15.168899 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-18 01:18:15.168902 | orchestrator |       + attachment = (known after apply)
2026-03-18 01:18:15.168906 | orchestrator |       + availability_zone = "nova"
2026-03-18 01:18:15.168910 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.168914 | orchestrator |       + image_id = (known after apply)
2026-03-18 01:18:15.168918 | orchestrator |       + metadata = (known after apply)
2026-03-18 01:18:15.168921 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-03-18 01:18:15.168925 | orchestrator |       + region = (known after apply)
2026-03-18 01:18:15.168929 | orchestrator |       + size = 80
2026-03-18 01:18:15.168933 | orchestrator |       + volume_retype_policy = "never"
2026-03-18 01:18:15.168937 | orchestrator |       + volume_type = "ssd"
2026-03-18 01:18:15.168941 | orchestrator |     }
2026-03-18 01:18:15.168944 | orchestrator |
2026-03-18 01:18:15.168948 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-18 01:18:15.168952 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-18 01:18:15.168956 | orchestrator |       + attachment = (known after apply)
2026-03-18 01:18:15.168960 | orchestrator |       + availability_zone = "nova"
2026-03-18 01:18:15.168964 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.168971 | orchestrator |       + image_id = (known after apply)
2026-03-18 01:18:15.168975 | orchestrator |       + metadata = (known after apply)
2026-03-18 01:18:15.168979 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-03-18 01:18:15.168983 | orchestrator |       + region = (known after apply)
2026-03-18 01:18:15.168987 | orchestrator |       + size = 80
2026-03-18 01:18:15.168990 | orchestrator |       + volume_retype_policy = "never"
2026-03-18 01:18:15.168994 | orchestrator |       + volume_type = "ssd"
2026-03-18 01:18:15.168998 | orchestrator |     }
2026-03-18 01:18:15.169002 | orchestrator |
2026-03-18 01:18:15.169006 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-18 01:18:15.169010 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-18 01:18:15.169013 | orchestrator |       + attachment = (known after apply)
2026-03-18 01:18:15.169017 | orchestrator |       + availability_zone = "nova"
2026-03-18 01:18:15.169021 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.169025 | orchestrator |       + image_id = (known after apply)
2026-03-18 01:18:15.169029 | orchestrator |       + metadata = (known after apply)
2026-03-18 01:18:15.169032 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-03-18 01:18:15.169036 | orchestrator |       + region = (known after apply)
2026-03-18 01:18:15.169040 | orchestrator |       + size = 80
2026-03-18 01:18:15.169044 | orchestrator |       + volume_retype_policy = "never"
2026-03-18 01:18:15.169048 | orchestrator |       + volume_type = "ssd"
2026-03-18 01:18:15.169052 | orchestrator |     }
2026-03-18 01:18:15.169057 | orchestrator |
2026-03-18 01:18:15.169061 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-18 01:18:15.169065 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-18 01:18:15.169069 | orchestrator |       + attachment = (known after apply)
2026-03-18 01:18:15.169073 | orchestrator |       + availability_zone = "nova"
2026-03-18 01:18:15.169077 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.169081 | orchestrator |       + metadata = (known after apply)
2026-03-18 01:18:15.169085 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-03-18 01:18:15.169088 | orchestrator |       + region = (known after apply)
2026-03-18 01:18:15.169092 | orchestrator |       + size = 20
2026-03-18 01:18:15.169096 | orchestrator |       + volume_retype_policy = "never"
2026-03-18 01:18:15.169100 | orchestrator |       + volume_type = "ssd"
2026-03-18 01:18:15.169104 | orchestrator |     }
2026-03-18 01:18:15.169108 | orchestrator |
2026-03-18 01:18:15.169112 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-18 01:18:15.169116 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-18 01:18:15.169119 | orchestrator |       + attachment = (known after apply)
2026-03-18 01:18:15.169123 | orchestrator |       + availability_zone = "nova"
2026-03-18 01:18:15.169127 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.169131 | orchestrator |       + metadata = (known after apply)
2026-03-18 01:18:15.169135 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-03-18 01:18:15.169139 | orchestrator |       + region = (known after apply)
2026-03-18 01:18:15.169143 | orchestrator |       + size = 20
2026-03-18 01:18:15.169147 | orchestrator |       + volume_retype_policy = "never"
2026-03-18 01:18:15.169150 | orchestrator |       + volume_type = "ssd"
2026-03-18 01:18:15.169154 | orchestrator |     }
2026-03-18 01:18:15.169158 | orchestrator |
2026-03-18 01:18:15.169162 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-18 01:18:15.169166 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-18 01:18:15.169170 | orchestrator |       + attachment = (known after apply)
2026-03-18 01:18:15.169173 | orchestrator |       + availability_zone = "nova"
2026-03-18 01:18:15.169177 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.169181 | orchestrator |       + metadata = (known after apply)
2026-03-18 01:18:15.169185 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-03-18 01:18:15.169189 | orchestrator |       + region = (known after apply)
2026-03-18 01:18:15.169195 | orchestrator |       + size = 20
2026-03-18 01:18:15.169199 | orchestrator |       + volume_retype_policy = "never"
2026-03-18 01:18:15.169203 | orchestrator |       + volume_type = "ssd"
2026-03-18 01:18:15.169207 | orchestrator |     }
2026-03-18 01:18:15.169211 | orchestrator |
2026-03-18 01:18:15.169214 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-18 01:18:15.169218 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-18 01:18:15.169222 | orchestrator |       + attachment = (known after apply)
2026-03-18 01:18:15.169226 | orchestrator |       + availability_zone = "nova"
2026-03-18 01:18:15.169230 | orchestrator |       + id = (known after apply)
2026-03-18 01:18:15.169237 | orchestrator |       + metadata = (known after apply)
2026-03-18 01:18:15.169241 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-18 01:18:15.169245 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.169249 | orchestrator | + size = 20
2026-03-18 01:18:15.169252 | orchestrator | + volume_retype_policy = "never"
2026-03-18 01:18:15.169256 | orchestrator | + volume_type = "ssd"
2026-03-18 01:18:15.169260 | orchestrator | }
2026-03-18 01:18:15.169264 | orchestrator |
2026-03-18 01:18:15.169268 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-18 01:18:15.169272 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-18 01:18:15.169276 | orchestrator | + attachment = (known after apply)
2026-03-18 01:18:15.169279 | orchestrator | + availability_zone = "nova"
2026-03-18 01:18:15.169283 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.169287 | orchestrator | + metadata = (known after apply)
2026-03-18 01:18:15.169291 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-18 01:18:15.169295 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.169299 | orchestrator | + size = 20
2026-03-18 01:18:15.169302 | orchestrator | + volume_retype_policy = "never"
2026-03-18 01:18:15.169306 | orchestrator | + volume_type = "ssd"
2026-03-18 01:18:15.169310 | orchestrator | }
2026-03-18 01:18:15.169314 | orchestrator |
2026-03-18 01:18:15.169318 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-18 01:18:15.169321 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-18 01:18:15.169325 | orchestrator | + attachment = (known after apply)
2026-03-18 01:18:15.169329 | orchestrator | + availability_zone = "nova"
2026-03-18 01:18:15.169333 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.169337 | orchestrator | + metadata = (known after apply)
2026-03-18 01:18:15.169340 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-18 01:18:15.169344 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.169348 | orchestrator | + size = 20
2026-03-18 01:18:15.169352 | orchestrator | + volume_retype_policy = "never"
2026-03-18 01:18:15.169356 | orchestrator | + volume_type = "ssd"
2026-03-18 01:18:15.169360 | orchestrator | }
2026-03-18 01:18:15.169365 | orchestrator |
2026-03-18 01:18:15.169369 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-18 01:18:15.169373 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-18 01:18:15.169377 | orchestrator | + attachment = (known after apply)
2026-03-18 01:18:15.169380 | orchestrator | + availability_zone = "nova"
2026-03-18 01:18:15.169384 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.169388 | orchestrator | + metadata = (known after apply)
2026-03-18 01:18:15.169392 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-18 01:18:15.169396 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.169400 | orchestrator | + size = 20
2026-03-18 01:18:15.169403 | orchestrator | + volume_retype_policy = "never"
2026-03-18 01:18:15.169407 | orchestrator | + volume_type = "ssd"
2026-03-18 01:18:15.169411 | orchestrator | }
2026-03-18 01:18:15.169415 | orchestrator |
2026-03-18 01:18:15.169419 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-18 01:18:15.169423 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-18 01:18:15.169430 | orchestrator | + attachment = (known after apply)
2026-03-18 01:18:15.169434 | orchestrator | + availability_zone = "nova"
2026-03-18 01:18:15.169438 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.169441 | orchestrator | + metadata = (known after apply)
2026-03-18 01:18:15.169445 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-18 01:18:15.169449 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.169453 | orchestrator | + size = 20
2026-03-18 01:18:15.169457 | orchestrator | + volume_retype_policy = "never"
2026-03-18 01:18:15.169460 | orchestrator | + volume_type = "ssd"
2026-03-18 01:18:15.169464 | orchestrator | }
2026-03-18 01:18:15.169468 | orchestrator |
2026-03-18 01:18:15.169472 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created
2026-03-18 01:18:15.169476 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-18 01:18:15.169480 | orchestrator | + attachment = (known after apply)
2026-03-18 01:18:15.169483 | orchestrator | + availability_zone = "nova"
2026-03-18 01:18:15.169487 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.169491 | orchestrator | + metadata = (known after apply)
2026-03-18 01:18:15.169495 | orchestrator | + name = "testbed-volume-8-node-5"
2026-03-18 01:18:15.169499 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.169503 | orchestrator | + size = 20
2026-03-18 01:18:15.169507 | orchestrator | + volume_retype_policy = "never"
2026-03-18 01:18:15.169510 | orchestrator | + volume_type = "ssd"
2026-03-18 01:18:15.169514 | orchestrator | }
2026-03-18 01:18:15.169520 | orchestrator |
2026-03-18 01:18:15.169523 | orchestrator | # openstack_compute_instance_v2.manager_server will be created
2026-03-18 01:18:15.169527 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" {
2026-03-18 01:18:15.169531 | orchestrator | + access_ip_v4 = (known after apply)
2026-03-18 01:18:15.169535 | orchestrator | + access_ip_v6 = (known after apply)
2026-03-18 01:18:15.169539 | orchestrator | + all_metadata = (known after apply)
2026-03-18 01:18:15.169543 | orchestrator | + all_tags = (known after apply)
2026-03-18 01:18:15.169546 | orchestrator | + availability_zone = "nova"
2026-03-18 01:18:15.169550 | orchestrator | + config_drive = true
2026-03-18 01:18:15.169557 | orchestrator | + created = (known after apply)
2026-03-18 01:18:15.169561 | orchestrator | + flavor_id = (known after apply)
2026-03-18 01:18:15.169565 | orchestrator | + flavor_name = "OSISM-4V-16"
2026-03-18 01:18:15.169568 | orchestrator | + force_delete = false
2026-03-18 01:18:15.169572 | orchestrator | + hypervisor_hostname = (known after apply)
2026-03-18 01:18:15.169576 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.169580 | orchestrator | + image_id = (known after apply)
2026-03-18 01:18:15.169584 | orchestrator | + image_name = (known after apply)
2026-03-18 01:18:15.169588 | orchestrator | + key_pair = "testbed"
2026-03-18 01:18:15.169592 | orchestrator | + name = "testbed-manager"
2026-03-18 01:18:15.169595 | orchestrator | + power_state = "active"
2026-03-18 01:18:15.169599 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.169603 | orchestrator | + security_groups = (known after apply)
2026-03-18 01:18:15.169607 | orchestrator | + stop_before_destroy = false
2026-03-18 01:18:15.169611 | orchestrator | + updated = (known after apply)
2026-03-18 01:18:15.169615 | orchestrator | + user_data = (sensitive value)
2026-03-18 01:18:15.169619 | orchestrator |
2026-03-18 01:18:15.169623 | orchestrator | + block_device {
2026-03-18 01:18:15.169627 | orchestrator | + boot_index = 0
2026-03-18 01:18:15.169630 | orchestrator | + delete_on_termination = false
2026-03-18 01:18:15.169634 | orchestrator | + destination_type = "volume"
2026-03-18 01:18:15.169638 | orchestrator | + multiattach = false
2026-03-18 01:18:15.169642 | orchestrator | + source_type = "volume"
2026-03-18 01:18:15.169646 | orchestrator | + uuid = (known after apply)
2026-03-18 01:18:15.169665 | orchestrator | }
2026-03-18 01:18:15.169670 | orchestrator |
2026-03-18 01:18:15.169673 | orchestrator | + network {
2026-03-18 01:18:15.169677 | orchestrator | + access_network = false
2026-03-18 01:18:15.169681 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-03-18 01:18:15.169685 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-03-18 01:18:15.169689 | orchestrator | + mac = (known after apply)
2026-03-18 01:18:15.169693 | orchestrator | + name = (known after apply)
2026-03-18 01:18:15.169696 | orchestrator | + port = (known after apply)
2026-03-18 01:18:15.169700 | orchestrator | + uuid = (known after apply)
2026-03-18 01:18:15.169704 | orchestrator | }
2026-03-18 01:18:15.169708 | orchestrator | }
2026-03-18 01:18:15.169713 | orchestrator |
2026-03-18 01:18:15.169717 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created
2026-03-18 01:18:15.169721 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-03-18 01:18:15.169725 | orchestrator | + access_ip_v4 = (known after apply)
2026-03-18 01:18:15.169729 | orchestrator | + access_ip_v6 = (known after apply)
2026-03-18 01:18:15.169733 | orchestrator | + all_metadata = (known after apply)
2026-03-18 01:18:15.169737 | orchestrator | + all_tags = (known after apply)
2026-03-18 01:18:15.169741 | orchestrator | + availability_zone = "nova"
2026-03-18 01:18:15.169744 | orchestrator | + config_drive = true
2026-03-18 01:18:15.169748 | orchestrator | + created = (known after apply)
2026-03-18 01:18:15.169752 | orchestrator | + flavor_id = (known after apply)
2026-03-18 01:18:15.169756 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-03-18 01:18:15.169760 | orchestrator | + force_delete = false
2026-03-18 01:18:15.169763 | orchestrator | + hypervisor_hostname = (known after apply)
2026-03-18 01:18:15.169767 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.169771 | orchestrator | + image_id = (known after apply)
2026-03-18 01:18:15.169775 | orchestrator | + image_name = (known after apply)
2026-03-18 01:18:15.169779 | orchestrator | + key_pair = "testbed"
2026-03-18 01:18:15.169783 | orchestrator | + name = "testbed-node-0"
2026-03-18 01:18:15.169786 | orchestrator | + power_state = "active"
2026-03-18 01:18:15.169790 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.169794 | orchestrator | + security_groups = (known after apply)
2026-03-18 01:18:15.169798 | orchestrator | + stop_before_destroy = false
2026-03-18 01:18:15.169802 | orchestrator | + updated = (known after apply)
2026-03-18 01:18:15.169805 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-03-18 01:18:15.169809 | orchestrator |
2026-03-18 01:18:15.169813 | orchestrator | + block_device {
2026-03-18 01:18:15.169817 | orchestrator | + boot_index = 0
2026-03-18 01:18:15.169821 | orchestrator | + delete_on_termination = false
2026-03-18 01:18:15.169825 | orchestrator | + destination_type = "volume"
2026-03-18 01:18:15.169828 | orchestrator | + multiattach = false
2026-03-18 01:18:15.169832 | orchestrator | + source_type = "volume"
2026-03-18 01:18:15.169836 | orchestrator | + uuid = (known after apply)
2026-03-18 01:18:15.169840 | orchestrator | }
2026-03-18 01:18:15.169844 | orchestrator |
2026-03-18 01:18:15.169847 | orchestrator | + network {
2026-03-18 01:18:15.169851 | orchestrator | + access_network = false
2026-03-18 01:18:15.169855 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-03-18 01:18:15.169859 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-03-18 01:18:15.169863 | orchestrator | + mac = (known after apply)
2026-03-18 01:18:15.169866 | orchestrator | + name = (known after apply)
2026-03-18 01:18:15.169870 | orchestrator | + port = (known after apply)
2026-03-18 01:18:15.169874 | orchestrator | + uuid = (known after apply)
2026-03-18 01:18:15.169878 | orchestrator | }
2026-03-18 01:18:15.169882 | orchestrator | }
2026-03-18 01:18:15.169887 | orchestrator |
2026-03-18 01:18:15.169892 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created
2026-03-18 01:18:15.169896 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-03-18 01:18:15.169899 | orchestrator | + access_ip_v4 = (known after apply)
2026-03-18 01:18:15.169907 | orchestrator | + access_ip_v6 = (known after apply)
2026-03-18 01:18:15.169911 | orchestrator | + all_metadata = (known after apply)
2026-03-18 01:18:15.169914 | orchestrator | + all_tags = (known after apply)
2026-03-18 01:18:15.169918 | orchestrator | + availability_zone = "nova"
2026-03-18 01:18:15.169922 | orchestrator | + config_drive = true
2026-03-18 01:18:15.169926 | orchestrator | + created = (known after apply)
2026-03-18 01:18:15.169930 | orchestrator | + flavor_id = (known after apply)
2026-03-18 01:18:15.169934 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-03-18 01:18:15.169937 | orchestrator | + force_delete = false
2026-03-18 01:18:15.169941 | orchestrator | + hypervisor_hostname = (known after apply)
2026-03-18 01:18:15.169945 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.169949 | orchestrator | + image_id = (known after apply)
2026-03-18 01:18:15.169953 | orchestrator | + image_name = (known after apply)
2026-03-18 01:18:15.169957 | orchestrator | + key_pair = "testbed"
2026-03-18 01:18:15.169961 | orchestrator | + name = "testbed-node-1"
2026-03-18 01:18:15.169964 | orchestrator | + power_state = "active"
2026-03-18 01:18:15.169968 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.169972 | orchestrator | + security_groups = (known after apply)
2026-03-18 01:18:15.169976 | orchestrator | + stop_before_destroy = false
2026-03-18 01:18:15.169980 | orchestrator | + updated = (known after apply)
2026-03-18 01:18:15.169986 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-03-18 01:18:15.169990 | orchestrator |
2026-03-18 01:18:15.169994 | orchestrator | + block_device {
2026-03-18 01:18:15.169998 | orchestrator | + boot_index = 0
2026-03-18 01:18:15.170002 | orchestrator | + delete_on_termination = false
2026-03-18 01:18:15.170005 | orchestrator | + destination_type = "volume"
2026-03-18 01:18:15.170009 | orchestrator | + multiattach = false
2026-03-18 01:18:15.170029 | orchestrator | + source_type = "volume"
2026-03-18 01:18:15.170034 | orchestrator | + uuid = (known after apply)
2026-03-18 01:18:15.170038 | orchestrator | }
2026-03-18 01:18:15.170042 | orchestrator |
2026-03-18 01:18:15.170046 | orchestrator | + network {
2026-03-18 01:18:15.170050 | orchestrator | + access_network = false
2026-03-18 01:18:15.170053 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-03-18 01:18:15.170057 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-03-18 01:18:15.170061 | orchestrator | + mac = (known after apply)
2026-03-18 01:18:15.170065 | orchestrator | + name = (known after apply)
2026-03-18 01:18:15.170069 | orchestrator | + port = (known after apply)
2026-03-18 01:18:15.170072 | orchestrator | + uuid = (known after apply)
2026-03-18 01:18:15.170076 | orchestrator | }
2026-03-18 01:18:15.170080 | orchestrator | }
2026-03-18 01:18:15.170086 | orchestrator |
2026-03-18 01:18:15.170090 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created
2026-03-18 01:18:15.170094 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-03-18 01:18:15.170098 | orchestrator | + access_ip_v4 = (known after apply)
2026-03-18 01:18:15.170101 | orchestrator | + access_ip_v6 = (known after apply)
2026-03-18 01:18:15.170105 | orchestrator | + all_metadata = (known after apply)
2026-03-18 01:18:15.170109 | orchestrator | + all_tags = (known after apply)
2026-03-18 01:18:15.170113 | orchestrator | + availability_zone = "nova"
2026-03-18 01:18:15.170117 | orchestrator | + config_drive = true
2026-03-18 01:18:15.170121 | orchestrator | + created = (known after apply)
2026-03-18 01:18:15.170125 | orchestrator | + flavor_id = (known after apply)
2026-03-18 01:18:15.170129 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-03-18 01:18:15.170132 | orchestrator | + force_delete = false
2026-03-18 01:18:15.170136 | orchestrator | + hypervisor_hostname = (known after apply)
2026-03-18 01:18:15.170140 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.170144 | orchestrator | + image_id = (known after apply)
2026-03-18 01:18:15.170151 | orchestrator | + image_name = (known after apply)
2026-03-18 01:18:15.170155 | orchestrator | + key_pair = "testbed"
2026-03-18 01:18:15.170159 | orchestrator | + name = "testbed-node-2"
2026-03-18 01:18:15.170163 | orchestrator | + power_state = "active"
2026-03-18 01:18:15.170167 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.170170 | orchestrator | + security_groups = (known after apply)
2026-03-18 01:18:15.170174 | orchestrator | + stop_before_destroy = false
2026-03-18 01:18:15.170178 | orchestrator | + updated = (known after apply)
2026-03-18 01:18:15.170182 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-03-18 01:18:15.170186 | orchestrator |
2026-03-18 01:18:15.170190 | orchestrator | + block_device {
2026-03-18 01:18:15.170194 | orchestrator | + boot_index = 0
2026-03-18 01:18:15.170197 | orchestrator | + delete_on_termination = false
2026-03-18 01:18:15.170201 | orchestrator | + destination_type = "volume"
2026-03-18 01:18:15.170205 | orchestrator | + multiattach = false
2026-03-18 01:18:15.170209 | orchestrator | + source_type = "volume"
2026-03-18 01:18:15.170213 | orchestrator | + uuid = (known after apply)
2026-03-18 01:18:15.170216 | orchestrator | }
2026-03-18 01:18:15.170220 | orchestrator |
2026-03-18 01:18:15.170224 | orchestrator | + network {
2026-03-18 01:18:15.170228 | orchestrator | + access_network = false
2026-03-18 01:18:15.170232 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-03-18 01:18:15.170236 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-03-18 01:18:15.170239 | orchestrator | + mac = (known after apply)
2026-03-18 01:18:15.170243 | orchestrator | + name = (known after apply)
2026-03-18 01:18:15.170247 | orchestrator | + port = (known after apply)
2026-03-18 01:18:15.170251 | orchestrator | + uuid = (known after apply)
2026-03-18 01:18:15.170255 | orchestrator | }
2026-03-18 01:18:15.170259 | orchestrator | }
2026-03-18 01:18:15.170264 | orchestrator |
2026-03-18 01:18:15.170271 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created
2026-03-18 01:18:15.170275 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-03-18 01:18:15.170279 | orchestrator | + access_ip_v4 = (known after apply)
2026-03-18 01:18:15.170283 | orchestrator | + access_ip_v6 = (known after apply)
2026-03-18 01:18:15.170287 | orchestrator | + all_metadata = (known after apply)
2026-03-18 01:18:15.170290 | orchestrator | + all_tags = (known after apply)
2026-03-18 01:18:15.170294 | orchestrator | + availability_zone = "nova"
2026-03-18 01:18:15.170298 | orchestrator | + config_drive = true
2026-03-18 01:18:15.170302 | orchestrator | + created = (known after apply)
2026-03-18 01:18:15.170306 | orchestrator | + flavor_id = (known after apply)
2026-03-18 01:18:15.170309 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-03-18 01:18:15.170313 | orchestrator | + force_delete = false
2026-03-18 01:18:15.170317 | orchestrator | + hypervisor_hostname = (known after apply)
2026-03-18 01:18:15.170321 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.170325 | orchestrator | + image_id = (known after apply)
2026-03-18 01:18:15.170329 | orchestrator | + image_name = (known after apply)
2026-03-18 01:18:15.170332 | orchestrator | + key_pair = "testbed"
2026-03-18 01:18:15.170336 | orchestrator | + name = "testbed-node-3"
2026-03-18 01:18:15.170340 | orchestrator | + power_state = "active"
2026-03-18 01:18:15.170344 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.170348 | orchestrator | + security_groups = (known after apply)
2026-03-18 01:18:15.170352 | orchestrator | + stop_before_destroy = false
2026-03-18 01:18:15.170355 | orchestrator | + updated = (known after apply)
2026-03-18 01:18:15.170359 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-03-18 01:18:15.170363 | orchestrator |
2026-03-18 01:18:15.170367 | orchestrator | + block_device {
2026-03-18 01:18:15.170371 | orchestrator | + boot_index = 0
2026-03-18 01:18:15.170375 | orchestrator | + delete_on_termination = false
2026-03-18 01:18:15.170379 | orchestrator | + destination_type = "volume"
2026-03-18 01:18:15.170385 | orchestrator | + multiattach = false
2026-03-18 01:18:15.170389 | orchestrator | + source_type = "volume"
2026-03-18 01:18:15.170393 | orchestrator | + uuid = (known after apply)
2026-03-18 01:18:15.170397 | orchestrator | }
2026-03-18 01:18:15.170401 | orchestrator |
2026-03-18 01:18:15.170405 | orchestrator | + network {
2026-03-18 01:18:15.170409 | orchestrator | + access_network = false
2026-03-18 01:18:15.170413 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-03-18 01:18:15.170416 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-03-18 01:18:15.170420 | orchestrator | + mac = (known after apply)
2026-03-18 01:18:15.170424 | orchestrator | + name = (known after apply)
2026-03-18 01:18:15.170428 | orchestrator | + port = (known after apply)
2026-03-18 01:18:15.170432 | orchestrator | + uuid = (known after apply)
2026-03-18 01:18:15.170435 | orchestrator | }
2026-03-18 01:18:15.170439 | orchestrator | }
2026-03-18 01:18:15.170445 | orchestrator |
2026-03-18 01:18:15.170449 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created
2026-03-18 01:18:15.170453 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-03-18 01:18:15.170456 | orchestrator | + access_ip_v4 = (known after apply)
2026-03-18 01:18:15.170460 | orchestrator | + access_ip_v6 = (known after apply)
2026-03-18 01:18:15.170464 | orchestrator | + all_metadata = (known after apply)
2026-03-18 01:18:15.170468 | orchestrator | + all_tags = (known after apply)
2026-03-18 01:18:15.170472 | orchestrator | + availability_zone = "nova"
2026-03-18 01:18:15.170476 | orchestrator | + config_drive = true
2026-03-18 01:18:15.170479 | orchestrator | + created = (known after apply)
2026-03-18 01:18:15.170483 | orchestrator | + flavor_id = (known after apply)
2026-03-18 01:18:15.170487 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-03-18 01:18:15.170491 | orchestrator | + force_delete = false
2026-03-18 01:18:15.170495 | orchestrator | + hypervisor_hostname = (known after apply)
2026-03-18 01:18:15.170499 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.170503 | orchestrator | + image_id = (known after apply)
2026-03-18 01:18:15.170506 | orchestrator | + image_name = (known after apply)
2026-03-18 01:18:15.170510 | orchestrator | + key_pair = "testbed"
2026-03-18 01:18:15.170514 | orchestrator | + name = "testbed-node-4"
2026-03-18 01:18:15.170518 | orchestrator | + power_state = "active"
2026-03-18 01:18:15.170522 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.170526 | orchestrator | + security_groups = (known after apply)
2026-03-18 01:18:15.170530 | orchestrator | + stop_before_destroy = false
2026-03-18 01:18:15.170533 | orchestrator | + updated = (known after apply)
2026-03-18 01:18:15.170537 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-03-18 01:18:15.170541 | orchestrator |
2026-03-18 01:18:15.170545 | orchestrator | + block_device {
2026-03-18 01:18:15.170549 | orchestrator | + boot_index = 0
2026-03-18 01:18:15.170553 | orchestrator | + delete_on_termination = false
2026-03-18 01:18:15.170556 | orchestrator | + destination_type = "volume"
2026-03-18 01:18:15.170560 | orchestrator | + multiattach = false
2026-03-18 01:18:15.170564 | orchestrator | + source_type = "volume"
2026-03-18 01:18:15.170568 | orchestrator | + uuid = (known after apply)
2026-03-18 01:18:15.170572 | orchestrator | }
2026-03-18 01:18:15.170576 | orchestrator |
2026-03-18 01:18:15.170579 | orchestrator | + network {
2026-03-18 01:18:15.170583 | orchestrator | + access_network = false
2026-03-18 01:18:15.170587 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-03-18 01:18:15.170591 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-03-18 01:18:15.170595 | orchestrator | + mac = (known after apply)
2026-03-18 01:18:15.170598 | orchestrator | + name = (known after apply)
2026-03-18 01:18:15.170602 | orchestrator | + port = (known after apply)
2026-03-18 01:18:15.170606 | orchestrator | + uuid = (known after apply)
2026-03-18 01:18:15.170610 | orchestrator | }
2026-03-18 01:18:15.170614 | orchestrator | }
2026-03-18 01:18:15.170697 | orchestrator |
2026-03-18 01:18:15.170705 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created
2026-03-18 01:18:15.170840 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-03-18 01:18:15.170844 | orchestrator | + access_ip_v4 = (known after apply)
2026-03-18 01:18:15.170848 | orchestrator | + access_ip_v6 = (known after apply)
2026-03-18 01:18:15.170956 | orchestrator | + all_metadata = (known after apply)
2026-03-18 01:18:15.171047 | orchestrator | + all_tags = (known after apply)
2026-03-18 01:18:15.171141 | orchestrator | + availability_zone = "nova"
2026-03-18 01:18:15.171145 | orchestrator | + config_drive = true
2026-03-18 01:18:15.171164 | orchestrator | + created = (known after apply)
2026-03-18 01:18:15.171169 | orchestrator | + flavor_id = (known after apply)
2026-03-18 01:18:15.171172 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-03-18 01:18:15.171325 | orchestrator | + force_delete = false
2026-03-18 01:18:15.171692 | orchestrator | + hypervisor_hostname = (known after apply)
2026-03-18 01:18:15.171877 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.171986 | orchestrator | + image_id = (known after apply)
2026-03-18 01:18:15.172142 | orchestrator | + image_name = (known after apply)
2026-03-18 01:18:15.172242 | orchestrator | + key_pair = "testbed"
2026-03-18 01:18:15.172344 | orchestrator | + name = "testbed-node-5"
2026-03-18 01:18:15.172466 | orchestrator | + power_state = "active"
2026-03-18 01:18:15.172536 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.172739 | orchestrator | + security_groups = (known after apply)
2026-03-18 01:18:15.172953 | orchestrator | + stop_before_destroy = false
2026-03-18 01:18:15.173075 | orchestrator | + updated = (known after apply)
2026-03-18 01:18:15.173255 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-03-18 01:18:15.173431 | orchestrator |
2026-03-18 01:18:15.173542 | orchestrator | + block_device {
2026-03-18 01:18:15.173702 | orchestrator | + boot_index = 0
2026-03-18 01:18:15.173877 | orchestrator | + delete_on_termination = false
2026-03-18 01:18:15.173990 | orchestrator | + destination_type = "volume"
2026-03-18 01:18:15.174127 | orchestrator | + multiattach = false
2026-03-18 01:18:15.174449 | orchestrator | + source_type = "volume"
2026-03-18 01:18:15.174560 | orchestrator | + uuid = (known after apply)
2026-03-18 01:18:15.174684 | orchestrator | }
2026-03-18 01:18:15.174974 | orchestrator |
2026-03-18 01:18:15.175139 | orchestrator | + network {
2026-03-18 01:18:15.175214 | orchestrator | + access_network = false
2026-03-18 01:18:15.175332 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-03-18 01:18:15.175386 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-03-18 01:18:15.175391 | orchestrator | + mac = (known after apply)
2026-03-18 01:18:15.175395 | orchestrator | + name = (known after apply)
2026-03-18 01:18:15.175444 | orchestrator | + port = (known after apply)
2026-03-18 01:18:15.175570 | orchestrator | + uuid = (known after apply)
2026-03-18 01:18:15.175674 | orchestrator | }
2026-03-18 01:18:15.175725 | orchestrator | }
2026-03-18 01:18:15.175908 | orchestrator |
2026-03-18 01:18:15.176017 | orchestrator | # openstack_compute_keypair_v2.key will be created
2026-03-18 01:18:15.176134 | orchestrator | + resource "openstack_compute_keypair_v2" "key" {
2026-03-18 01:18:15.176197 | orchestrator | + fingerprint = (known after apply)
2026-03-18 01:18:15.176271 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.176302 | orchestrator | + name = "testbed"
2026-03-18 01:18:15.176335 | orchestrator | + private_key = (sensitive value)
2026-03-18 01:18:15.176500 | orchestrator | + public_key = (known after apply)
2026-03-18 01:18:15.176534 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.176706 | orchestrator | + user_id = (known after apply)
2026-03-18 01:18:15.176761 | orchestrator | }
2026-03-18 01:18:15.176819 | orchestrator |
2026-03-18 01:18:15.176883 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
2026-03-18 01:18:15.176945 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-18 01:18:15.177082 | orchestrator | + device = (known after apply)
2026-03-18 01:18:15.177167 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.177221 | orchestrator | + instance_id = (known after apply)
2026-03-18 01:18:15.177277 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.177291 | orchestrator | + volume_id = (known after apply)
2026-03-18 01:18:15.177295 | orchestrator | }
2026-03-18 01:18:15.177299 | orchestrator |
2026-03-18 01:18:15.177303 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
2026-03-18 01:18:15.177307 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-18 01:18:15.177311 | orchestrator | + device = (known after apply)
2026-03-18 01:18:15.177315 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.177319 | orchestrator | + instance_id = (known after apply)
2026-03-18 01:18:15.177323 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.177327 | orchestrator | + volume_id = (known after apply)
2026-03-18 01:18:15.177330 | orchestrator | }
2026-03-18 01:18:15.177334 | orchestrator |
2026-03-18 01:18:15.177338 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
2026-03-18 01:18:15.177342 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-18 01:18:15.177346 | orchestrator | + device = (known after apply)
2026-03-18 01:18:15.177350 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.177353 | orchestrator | + instance_id = (known after apply)
2026-03-18 01:18:15.177357 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.177361 | orchestrator | + volume_id = (known after apply)
2026-03-18 01:18:15.177365 | orchestrator | }
2026-03-18 01:18:15.177368 | orchestrator |
2026-03-18 01:18:15.177372 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-03-18 01:18:15.177376 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-18 01:18:15.177380 | orchestrator | + device = (known after apply)
2026-03-18 01:18:15.177384 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.177387 | orchestrator | + instance_id = (known after apply)
2026-03-18 01:18:15.177391 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.177395 | orchestrator | + volume_id = (known after apply)
2026-03-18 01:18:15.177399 | orchestrator | }
2026-03-18 01:18:15.177403 | orchestrator |
2026-03-18 01:18:15.177407 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-03-18 01:18:15.177411 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-18 01:18:15.177414 | orchestrator | + device = (known after apply)
2026-03-18 01:18:15.177418 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.177422 | orchestrator | + instance_id = (known after apply)
2026-03-18 01:18:15.177426 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.177429 | orchestrator | + volume_id = (known after apply)
2026-03-18 01:18:15.177433 | orchestrator | }
2026-03-18 01:18:15.177437 | orchestrator |
2026-03-18 01:18:15.177441 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-03-18 01:18:15.177445 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-18 01:18:15.177448 | orchestrator | + device = (known after apply)
2026-03-18 01:18:15.177452 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.177456 | orchestrator | + instance_id = (known after apply)
2026-03-18 01:18:15.177460 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.177463 | orchestrator | + volume_id = (known after apply)
2026-03-18 01:18:15.177467 | orchestrator | }
2026-03-18 01:18:15.177471 | orchestrator |
2026-03-18 01:18:15.177475 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-03-18 01:18:15.177479 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-18 01:18:15.177483 | orchestrator | + device = (known after apply)
2026-03-18 01:18:15.177486 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.177490 | orchestrator | + instance_id = (known after apply)
2026-03-18 01:18:15.177494 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.177502 | orchestrator | + volume_id = (known after apply)
2026-03-18 01:18:15.177506 | orchestrator | }
2026-03-18 01:18:15.177510 | orchestrator |
2026-03-18 01:18:15.177514 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-03-18 01:18:15.177517 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-18 01:18:15.177521 | orchestrator | + device = (known after apply)
2026-03-18 01:18:15.177525 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.177529 | orchestrator | + instance_id = (known after apply)
2026-03-18 01:18:15.177533 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.177536 | orchestrator | + volume_id = (known after apply)
2026-03-18 01:18:15.177540 | orchestrator | }
2026-03-18 01:18:15.177544 | orchestrator |
2026-03-18 01:18:15.177548 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-03-18 01:18:15.177552 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-03-18 01:18:15.177555 | orchestrator | + device = (known after apply)
2026-03-18 01:18:15.177559 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.177563 | orchestrator | + instance_id = (known after apply)
2026-03-18 01:18:15.177567 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.177570 | orchestrator | + volume_id = (known after apply)
2026-03-18 01:18:15.177574 | orchestrator | }
2026-03-18 01:18:15.177578 | orchestrator |
2026-03-18 01:18:15.177582 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-03-18 01:18:15.177586 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-03-18 01:18:15.177590 | orchestrator | + fixed_ip = (known after apply)
2026-03-18 01:18:15.177594 | orchestrator | + floating_ip = (known after apply)
2026-03-18 01:18:15.177598 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.177609 | orchestrator | + port_id = (known after apply)
2026-03-18 01:18:15.177613 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.177617 | orchestrator | }
2026-03-18 01:18:15.177621 | orchestrator |
2026-03-18 01:18:15.177624 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-03-18 01:18:15.177628 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-03-18 01:18:15.177632 | orchestrator | + address = (known after apply)
2026-03-18 01:18:15.177636 | orchestrator | + all_tags = (known after apply)
2026-03-18 01:18:15.177643 | orchestrator | + dns_domain = (known after apply)
2026-03-18 01:18:15.177647 | orchestrator | + dns_name = (known after apply)
2026-03-18 01:18:15.177650 | orchestrator | + fixed_ip = (known after apply)
2026-03-18 01:18:15.177668 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.177672 | orchestrator | + pool = "public"
2026-03-18 01:18:15.177676 | orchestrator | + port_id = (known after apply)
2026-03-18 01:18:15.177680 | orchestrator | + region = (known after apply)
2026-03-18 01:18:15.177684 | orchestrator | + subnet_id = (known after apply)
2026-03-18 01:18:15.177687 | orchestrator | + tenant_id = (known after apply)
2026-03-18 01:18:15.177691 | orchestrator | }
2026-03-18 01:18:15.177695 | orchestrator |
2026-03-18 01:18:15.177699 | orchestrator | # openstack_networking_network_v2.net_management will be created
2026-03-18 01:18:15.177703 | orchestrator | + resource "openstack_networking_network_v2" "net_management" {
2026-03-18 01:18:15.177707 | orchestrator | + admin_state_up = (known after apply)
2026-03-18 01:18:15.177710 | orchestrator | + all_tags = (known after apply)
2026-03-18 01:18:15.177714 | orchestrator | + availability_zone_hints = [
2026-03-18 01:18:15.177718 | orchestrator | + "nova",
2026-03-18 01:18:15.177722 | orchestrator | ]
2026-03-18 01:18:15.177726 | orchestrator | + dns_domain = (known after apply)
2026-03-18 01:18:15.177730 | orchestrator | + external = (known after apply)
2026-03-18 01:18:15.177733 | orchestrator | + id = (known after apply)
2026-03-18 01:18:15.177737 | orchestrator | + mtu = (known after apply)
2026-03-18 01:18:15.177741 | orchestrator | + name = "net-testbed-management"
2026-03-18 01:18:15.177745 | orchestrator | + port_security_enabled = (known after apply)
2026-03-18 01:18:15.177753 | orchestrator | + qos_policy_id = (known after apply) 2026-03-18 01:18:15.177757 | orchestrator | + region = (known after apply) 2026-03-18 01:18:15.177761 | orchestrator | + shared = (known after apply) 2026-03-18 01:18:15.177764 | orchestrator | + tenant_id = (known after apply) 2026-03-18 01:18:15.177768 | orchestrator | + transparent_vlan = (known after apply) 2026-03-18 01:18:15.177772 | orchestrator | 2026-03-18 01:18:15.177776 | orchestrator | + segments (known after apply) 2026-03-18 01:18:15.177780 | orchestrator | } 2026-03-18 01:18:15.177784 | orchestrator | 2026-03-18 01:18:15.177787 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created 2026-03-18 01:18:15.177791 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" { 2026-03-18 01:18:15.177795 | orchestrator | + admin_state_up = (known after apply) 2026-03-18 01:18:15.177799 | orchestrator | + all_fixed_ips = (known after apply) 2026-03-18 01:18:15.177803 | orchestrator | + all_security_group_ids = (known after apply) 2026-03-18 01:18:15.177806 | orchestrator | + all_tags = (known after apply) 2026-03-18 01:18:15.177810 | orchestrator | + device_id = (known after apply) 2026-03-18 01:18:15.177814 | orchestrator | + device_owner = (known after apply) 2026-03-18 01:18:15.177818 | orchestrator | + dns_assignment = (known after apply) 2026-03-18 01:18:15.177822 | orchestrator | + dns_name = (known after apply) 2026-03-18 01:18:15.177826 | orchestrator | + id = (known after apply) 2026-03-18 01:18:15.177829 | orchestrator | + mac_address = (known after apply) 2026-03-18 01:18:15.177833 | orchestrator | + network_id = (known after apply) 2026-03-18 01:18:15.177837 | orchestrator | + port_security_enabled = (known after apply) 2026-03-18 01:18:15.177841 | orchestrator | + qos_policy_id = (known after apply) 2026-03-18 01:18:15.177845 | orchestrator | + region = (known after apply) 2026-03-18 01:18:15.177916 | 
orchestrator | + security_group_ids = (known after apply) 2026-03-18 01:18:15.178114 | orchestrator | + tenant_id = (known after apply) 2026-03-18 01:18:15.178162 | orchestrator | 2026-03-18 01:18:15.178167 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.178171 | orchestrator | + ip_address = "192.168.16.8/32" 2026-03-18 01:18:15.178203 | orchestrator | } 2026-03-18 01:18:15.178322 | orchestrator | 2026-03-18 01:18:15.178405 | orchestrator | + binding (known after apply) 2026-03-18 01:18:15.178503 | orchestrator | 2026-03-18 01:18:15.178581 | orchestrator | + fixed_ip { 2026-03-18 01:18:15.178735 | orchestrator | + ip_address = "192.168.16.5" 2026-03-18 01:18:15.178832 | orchestrator | + subnet_id = (known after apply) 2026-03-18 01:18:15.178893 | orchestrator | } 2026-03-18 01:18:15.178933 | orchestrator | } 2026-03-18 01:18:15.179066 | orchestrator | 2026-03-18 01:18:15.179315 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created 2026-03-18 01:18:15.179439 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-03-18 01:18:15.179568 | orchestrator | + admin_state_up = (known after apply) 2026-03-18 01:18:15.179705 | orchestrator | + all_fixed_ips = (known after apply) 2026-03-18 01:18:15.179718 | orchestrator | + all_security_group_ids = (known after apply) 2026-03-18 01:18:15.179832 | orchestrator | + all_tags = (known after apply) 2026-03-18 01:18:15.179922 | orchestrator | + device_id = (known after apply) 2026-03-18 01:18:15.180023 | orchestrator | + device_owner = (known after apply) 2026-03-18 01:18:15.180099 | orchestrator | + dns_assignment = (known after apply) 2026-03-18 01:18:15.180176 | orchestrator | + dns_name = (known after apply) 2026-03-18 01:18:15.180199 | orchestrator | + id = (known after apply) 2026-03-18 01:18:15.180503 | orchestrator | + mac_address = (known after apply) 2026-03-18 01:18:15.180770 | orchestrator | + network_id = (known after apply) 2026-03-18 
01:18:15.180859 | orchestrator | + port_security_enabled = (known after apply) 2026-03-18 01:18:15.180937 | orchestrator | + qos_policy_id = (known after apply) 2026-03-18 01:18:15.181032 | orchestrator | + region = (known after apply) 2026-03-18 01:18:15.181106 | orchestrator | + security_group_ids = (known after apply) 2026-03-18 01:18:15.181110 | orchestrator | + tenant_id = (known after apply) 2026-03-18 01:18:15.181114 | orchestrator | 2026-03-18 01:18:15.181118 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.181122 | orchestrator | + ip_address = "192.168.16.254/32" 2026-03-18 01:18:15.181126 | orchestrator | } 2026-03-18 01:18:15.181129 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.181133 | orchestrator | + ip_address = "192.168.16.8/32" 2026-03-18 01:18:15.181137 | orchestrator | } 2026-03-18 01:18:15.181141 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.181156 | orchestrator | + ip_address = "192.168.16.9/32" 2026-03-18 01:18:15.181160 | orchestrator | } 2026-03-18 01:18:15.181164 | orchestrator | 2026-03-18 01:18:15.181168 | orchestrator | + binding (known after apply) 2026-03-18 01:18:15.181171 | orchestrator | 2026-03-18 01:18:15.181175 | orchestrator | + fixed_ip { 2026-03-18 01:18:15.181179 | orchestrator | + ip_address = "192.168.16.10" 2026-03-18 01:18:15.181183 | orchestrator | + subnet_id = (known after apply) 2026-03-18 01:18:15.181187 | orchestrator | } 2026-03-18 01:18:15.181191 | orchestrator | } 2026-03-18 01:18:15.181195 | orchestrator | 2026-03-18 01:18:15.181199 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created 2026-03-18 01:18:15.181203 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-03-18 01:18:15.181212 | orchestrator | + admin_state_up = (known after apply) 2026-03-18 01:18:15.181216 | orchestrator | + all_fixed_ips = (known after apply) 2026-03-18 01:18:15.181220 | orchestrator | + all_security_group_ids = 
(known after apply) 2026-03-18 01:18:15.181224 | orchestrator | + all_tags = (known after apply) 2026-03-18 01:18:15.181228 | orchestrator | + device_id = (known after apply) 2026-03-18 01:18:15.181232 | orchestrator | + device_owner = (known after apply) 2026-03-18 01:18:15.181235 | orchestrator | + dns_assignment = (known after apply) 2026-03-18 01:18:15.181239 | orchestrator | + dns_name = (known after apply) 2026-03-18 01:18:15.181243 | orchestrator | + id = (known after apply) 2026-03-18 01:18:15.181247 | orchestrator | + mac_address = (known after apply) 2026-03-18 01:18:15.181251 | orchestrator | + network_id = (known after apply) 2026-03-18 01:18:15.181254 | orchestrator | + port_security_enabled = (known after apply) 2026-03-18 01:18:15.181258 | orchestrator | + qos_policy_id = (known after apply) 2026-03-18 01:18:15.181262 | orchestrator | + region = (known after apply) 2026-03-18 01:18:15.181266 | orchestrator | + security_group_ids = (known after apply) 2026-03-18 01:18:15.181270 | orchestrator | + tenant_id = (known after apply) 2026-03-18 01:18:15.181273 | orchestrator | 2026-03-18 01:18:15.181277 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.181281 | orchestrator | + ip_address = "192.168.16.254/32" 2026-03-18 01:18:15.181285 | orchestrator | } 2026-03-18 01:18:15.181289 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.181293 | orchestrator | + ip_address = "192.168.16.8/32" 2026-03-18 01:18:15.181297 | orchestrator | } 2026-03-18 01:18:15.181300 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.181304 | orchestrator | + ip_address = "192.168.16.9/32" 2026-03-18 01:18:15.181308 | orchestrator | } 2026-03-18 01:18:15.181312 | orchestrator | 2026-03-18 01:18:15.181316 | orchestrator | + binding (known after apply) 2026-03-18 01:18:15.181320 | orchestrator | 2026-03-18 01:18:15.181323 | orchestrator | + fixed_ip { 2026-03-18 01:18:15.181327 | orchestrator | + ip_address = "192.168.16.11" 2026-03-18 
01:18:15.181331 | orchestrator | + subnet_id = (known after apply) 2026-03-18 01:18:15.181335 | orchestrator | } 2026-03-18 01:18:15.181339 | orchestrator | } 2026-03-18 01:18:15.181342 | orchestrator | 2026-03-18 01:18:15.181346 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created 2026-03-18 01:18:15.181350 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-03-18 01:18:15.181354 | orchestrator | + admin_state_up = (known after apply) 2026-03-18 01:18:15.181358 | orchestrator | + all_fixed_ips = (known after apply) 2026-03-18 01:18:15.181362 | orchestrator | + all_security_group_ids = (known after apply) 2026-03-18 01:18:15.181365 | orchestrator | + all_tags = (known after apply) 2026-03-18 01:18:15.181376 | orchestrator | + device_id = (known after apply) 2026-03-18 01:18:15.181383 | orchestrator | + device_owner = (known after apply) 2026-03-18 01:18:15.181389 | orchestrator | + dns_assignment = (known after apply) 2026-03-18 01:18:15.181396 | orchestrator | + dns_name = (known after apply) 2026-03-18 01:18:15.181402 | orchestrator | + id = (known after apply) 2026-03-18 01:18:15.181409 | orchestrator | + mac_address = (known after apply) 2026-03-18 01:18:15.181415 | orchestrator | + network_id = (known after apply) 2026-03-18 01:18:15.181421 | orchestrator | + port_security_enabled = (known after apply) 2026-03-18 01:18:15.181427 | orchestrator | + qos_policy_id = (known after apply) 2026-03-18 01:18:15.181433 | orchestrator | + region = (known after apply) 2026-03-18 01:18:15.181439 | orchestrator | + security_group_ids = (known after apply) 2026-03-18 01:18:15.181447 | orchestrator | + tenant_id = (known after apply) 2026-03-18 01:18:15.181453 | orchestrator | 2026-03-18 01:18:15.181459 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.181465 | orchestrator | + ip_address = "192.168.16.254/32" 2026-03-18 01:18:15.181471 | orchestrator | } 2026-03-18 01:18:15.181477 | 
orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.181483 | orchestrator | + ip_address = "192.168.16.8/32" 2026-03-18 01:18:15.181488 | orchestrator | } 2026-03-18 01:18:15.181494 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.181500 | orchestrator | + ip_address = "192.168.16.9/32" 2026-03-18 01:18:15.181506 | orchestrator | } 2026-03-18 01:18:15.181512 | orchestrator | 2026-03-18 01:18:15.181517 | orchestrator | + binding (known after apply) 2026-03-18 01:18:15.181523 | orchestrator | 2026-03-18 01:18:15.181529 | orchestrator | + fixed_ip { 2026-03-18 01:18:15.181534 | orchestrator | + ip_address = "192.168.16.12" 2026-03-18 01:18:15.181540 | orchestrator | + subnet_id = (known after apply) 2026-03-18 01:18:15.181546 | orchestrator | } 2026-03-18 01:18:15.181552 | orchestrator | } 2026-03-18 01:18:15.181557 | orchestrator | 2026-03-18 01:18:15.181563 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created 2026-03-18 01:18:15.181569 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-03-18 01:18:15.181575 | orchestrator | + admin_state_up = (known after apply) 2026-03-18 01:18:15.181581 | orchestrator | + all_fixed_ips = (known after apply) 2026-03-18 01:18:15.181588 | orchestrator | + all_security_group_ids = (known after apply) 2026-03-18 01:18:15.181594 | orchestrator | + all_tags = (known after apply) 2026-03-18 01:18:15.181600 | orchestrator | + device_id = (known after apply) 2026-03-18 01:18:15.181607 | orchestrator | + device_owner = (known after apply) 2026-03-18 01:18:15.181613 | orchestrator | + dns_assignment = (known after apply) 2026-03-18 01:18:15.181619 | orchestrator | + dns_name = (known after apply) 2026-03-18 01:18:15.181625 | orchestrator | + id = (known after apply) 2026-03-18 01:18:15.181632 | orchestrator | + mac_address = (known after apply) 2026-03-18 01:18:15.181639 | orchestrator | + network_id = (known after apply) 2026-03-18 01:18:15.181645 
| orchestrator | + port_security_enabled = (known after apply) 2026-03-18 01:18:15.181651 | orchestrator | + qos_policy_id = (known after apply) 2026-03-18 01:18:15.181690 | orchestrator | + region = (known after apply) 2026-03-18 01:18:15.181698 | orchestrator | + security_group_ids = (known after apply) 2026-03-18 01:18:15.181704 | orchestrator | + tenant_id = (known after apply) 2026-03-18 01:18:15.181711 | orchestrator | 2026-03-18 01:18:15.181717 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.181723 | orchestrator | + ip_address = "192.168.16.254/32" 2026-03-18 01:18:15.181730 | orchestrator | } 2026-03-18 01:18:15.181745 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.181752 | orchestrator | + ip_address = "192.168.16.8/32" 2026-03-18 01:18:15.181759 | orchestrator | } 2026-03-18 01:18:15.181766 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.181772 | orchestrator | + ip_address = "192.168.16.9/32" 2026-03-18 01:18:15.181778 | orchestrator | } 2026-03-18 01:18:15.181784 | orchestrator | 2026-03-18 01:18:15.181802 | orchestrator | + binding (known after apply) 2026-03-18 01:18:15.181808 | orchestrator | 2026-03-18 01:18:15.181814 | orchestrator | + fixed_ip { 2026-03-18 01:18:15.181820 | orchestrator | + ip_address = "192.168.16.13" 2026-03-18 01:18:15.181826 | orchestrator | + subnet_id = (known after apply) 2026-03-18 01:18:15.181833 | orchestrator | } 2026-03-18 01:18:15.181839 | orchestrator | } 2026-03-18 01:18:15.181845 | orchestrator | 2026-03-18 01:18:15.181851 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created 2026-03-18 01:18:15.181855 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-03-18 01:18:15.181859 | orchestrator | + admin_state_up = (known after apply) 2026-03-18 01:18:15.181863 | orchestrator | + all_fixed_ips = (known after apply) 2026-03-18 01:18:15.181867 | orchestrator | + all_security_group_ids = (known after apply) 
2026-03-18 01:18:15.181870 | orchestrator | + all_tags = (known after apply) 2026-03-18 01:18:15.181874 | orchestrator | + device_id = (known after apply) 2026-03-18 01:18:15.181878 | orchestrator | + device_owner = (known after apply) 2026-03-18 01:18:15.181882 | orchestrator | + dns_assignment = (known after apply) 2026-03-18 01:18:15.181885 | orchestrator | + dns_name = (known after apply) 2026-03-18 01:18:15.181894 | orchestrator | + id = (known after apply) 2026-03-18 01:18:15.181898 | orchestrator | + mac_address = (known after apply) 2026-03-18 01:18:15.181902 | orchestrator | + network_id = (known after apply) 2026-03-18 01:18:15.181906 | orchestrator | + port_security_enabled = (known after apply) 2026-03-18 01:18:15.181909 | orchestrator | + qos_policy_id = (known after apply) 2026-03-18 01:18:15.181913 | orchestrator | + region = (known after apply) 2026-03-18 01:18:15.181917 | orchestrator | + security_group_ids = (known after apply) 2026-03-18 01:18:15.181921 | orchestrator | + tenant_id = (known after apply) 2026-03-18 01:18:15.181926 | orchestrator | 2026-03-18 01:18:15.181930 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.181936 | orchestrator | + ip_address = "192.168.16.254/32" 2026-03-18 01:18:15.181940 | orchestrator | } 2026-03-18 01:18:15.181944 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.181947 | orchestrator | + ip_address = "192.168.16.8/32" 2026-03-18 01:18:15.181951 | orchestrator | } 2026-03-18 01:18:15.181955 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.181959 | orchestrator | + ip_address = "192.168.16.9/32" 2026-03-18 01:18:15.181962 | orchestrator | } 2026-03-18 01:18:15.181966 | orchestrator | 2026-03-18 01:18:15.181970 | orchestrator | + binding (known after apply) 2026-03-18 01:18:15.181974 | orchestrator | 2026-03-18 01:18:15.181977 | orchestrator | + fixed_ip { 2026-03-18 01:18:15.181981 | orchestrator | + ip_address = "192.168.16.14" 2026-03-18 01:18:15.181985 | orchestrator 
| + subnet_id = (known after apply) 2026-03-18 01:18:15.181989 | orchestrator | } 2026-03-18 01:18:15.181993 | orchestrator | } 2026-03-18 01:18:15.181996 | orchestrator | 2026-03-18 01:18:15.182000 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created 2026-03-18 01:18:15.182004 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-03-18 01:18:15.182008 | orchestrator | + admin_state_up = (known after apply) 2026-03-18 01:18:15.182011 | orchestrator | + all_fixed_ips = (known after apply) 2026-03-18 01:18:15.182033 | orchestrator | + all_security_group_ids = (known after apply) 2026-03-18 01:18:15.182037 | orchestrator | + all_tags = (known after apply) 2026-03-18 01:18:15.182040 | orchestrator | + device_id = (known after apply) 2026-03-18 01:18:15.182044 | orchestrator | + device_owner = (known after apply) 2026-03-18 01:18:15.182048 | orchestrator | + dns_assignment = (known after apply) 2026-03-18 01:18:15.182051 | orchestrator | + dns_name = (known after apply) 2026-03-18 01:18:15.182055 | orchestrator | + id = (known after apply) 2026-03-18 01:18:15.182059 | orchestrator | + mac_address = (known after apply) 2026-03-18 01:18:15.182063 | orchestrator | + network_id = (known after apply) 2026-03-18 01:18:15.182066 | orchestrator | + port_security_enabled = (known after apply) 2026-03-18 01:18:15.182070 | orchestrator | + qos_policy_id = (known after apply) 2026-03-18 01:18:15.182078 | orchestrator | + region = (known after apply) 2026-03-18 01:18:15.182082 | orchestrator | + security_group_ids = (known after apply) 2026-03-18 01:18:15.182086 | orchestrator | + tenant_id = (known after apply) 2026-03-18 01:18:15.182090 | orchestrator | 2026-03-18 01:18:15.182093 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.182097 | orchestrator | + ip_address = "192.168.16.254/32" 2026-03-18 01:18:15.182101 | orchestrator | } 2026-03-18 01:18:15.182105 | orchestrator | + allowed_address_pairs 
{ 2026-03-18 01:18:15.182108 | orchestrator | + ip_address = "192.168.16.8/32" 2026-03-18 01:18:15.182113 | orchestrator | } 2026-03-18 01:18:15.182116 | orchestrator | + allowed_address_pairs { 2026-03-18 01:18:15.182120 | orchestrator | + ip_address = "192.168.16.9/32" 2026-03-18 01:18:15.182124 | orchestrator | } 2026-03-18 01:18:15.182128 | orchestrator | 2026-03-18 01:18:15.182131 | orchestrator | + binding (known after apply) 2026-03-18 01:18:15.182135 | orchestrator | 2026-03-18 01:18:15.182139 | orchestrator | + fixed_ip { 2026-03-18 01:18:15.182143 | orchestrator | + ip_address = "192.168.16.15" 2026-03-18 01:18:15.182147 | orchestrator | + subnet_id = (known after apply) 2026-03-18 01:18:15.182150 | orchestrator | } 2026-03-18 01:18:15.182154 | orchestrator | } 2026-03-18 01:18:15.182158 | orchestrator | 2026-03-18 01:18:15.182162 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created 2026-03-18 01:18:15.182166 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" { 2026-03-18 01:18:15.182170 | orchestrator | + force_destroy = false 2026-03-18 01:18:15.182173 | orchestrator | + id = (known after apply) 2026-03-18 01:18:15.182177 | orchestrator | + port_id = (known after apply) 2026-03-18 01:18:15.182181 | orchestrator | + region = (known after apply) 2026-03-18 01:18:15.182185 | orchestrator | + router_id = (known after apply) 2026-03-18 01:18:15.182188 | orchestrator | + subnet_id = (known after apply) 2026-03-18 01:18:15.182192 | orchestrator | } 2026-03-18 01:18:15.182196 | orchestrator | 2026-03-18 01:18:15.182200 | orchestrator | # openstack_networking_router_v2.router will be created 2026-03-18 01:18:15.182204 | orchestrator | + resource "openstack_networking_router_v2" "router" { 2026-03-18 01:18:15.182208 | orchestrator | + admin_state_up = (known after apply) 2026-03-18 01:18:15.182211 | orchestrator | + all_tags = (known after apply) 2026-03-18 01:18:15.182215 | 
orchestrator | + availability_zone_hints = [ 2026-03-18 01:18:15.182219 | orchestrator | + "nova", 2026-03-18 01:18:15.182223 | orchestrator | ] 2026-03-18 01:18:15.182227 | orchestrator | + distributed = (known after apply) 2026-03-18 01:18:15.182230 | orchestrator | + enable_snat = (known after apply) 2026-03-18 01:18:15.182234 | orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2026-03-18 01:18:15.182238 | orchestrator | + external_qos_policy_id = (known after apply) 2026-03-18 01:18:15.182248 | orchestrator | + id = (known after apply) 2026-03-18 01:18:15.182252 | orchestrator | + name = "testbed" 2026-03-18 01:18:15.182256 | orchestrator | + region = (known after apply) 2026-03-18 01:18:15.182260 | orchestrator | + tenant_id = (known after apply) 2026-03-18 01:18:15.182264 | orchestrator | 2026-03-18 01:18:15.182267 | orchestrator | + external_fixed_ip (known after apply) 2026-03-18 01:18:15.182271 | orchestrator | } 2026-03-18 01:18:15.182275 | orchestrator | 2026-03-18 01:18:15.182279 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2026-03-18 01:18:15.182284 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2026-03-18 01:18:15.182288 | orchestrator | + description = "ssh" 2026-03-18 01:18:15.182291 | orchestrator | + direction = "ingress" 2026-03-18 01:18:15.182295 | orchestrator | + ethertype = "IPv4" 2026-03-18 01:18:15.182299 | orchestrator | + id = (known after apply) 2026-03-18 01:18:15.182303 | orchestrator | + port_range_max = 22 2026-03-18 01:18:15.182307 | orchestrator | + port_range_min = 22 2026-03-18 01:18:15.182311 | orchestrator | + protocol = "tcp" 2026-03-18 01:18:15.182314 | orchestrator | + region = (known after apply) 2026-03-18 01:18:15.182322 | orchestrator | + remote_address_group_id = (known after apply) 2026-03-18 01:18:15.182326 | orchestrator | + remote_group_id = (known after apply) 2026-03-18 
01:18:15.182329 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-03-18 01:18:15.182333 | orchestrator | + security_group_id = (known after apply) 2026-03-18 01:18:15.182337 | orchestrator | + tenant_id = (known after apply) 2026-03-18 01:18:15.182341 | orchestrator | } 2026-03-18 01:18:15.182345 | orchestrator | 2026-03-18 01:18:15.182348 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2026-03-18 01:18:15.182352 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2026-03-18 01:18:15.182356 | orchestrator | + description = "wireguard" 2026-03-18 01:18:15.182360 | orchestrator | + direction = "ingress" 2026-03-18 01:18:15.182364 | orchestrator | + ethertype = "IPv4" 2026-03-18 01:18:15.182367 | orchestrator | + id = (known after apply) 2026-03-18 01:18:15.182371 | orchestrator | + port_range_max = 51820 2026-03-18 01:18:15.182375 | orchestrator | + port_range_min = 51820 2026-03-18 01:18:15.182379 | orchestrator | + protocol = "udp" 2026-03-18 01:18:15.182383 | orchestrator | + region = (known after apply) 2026-03-18 01:18:15.182386 | orchestrator | + remote_address_group_id = (known after apply) 2026-03-18 01:18:15.182390 | orchestrator | + remote_group_id = (known after apply) 2026-03-18 01:18:15.182394 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-03-18 01:18:15.182398 | orchestrator | + security_group_id = (known after apply) 2026-03-18 01:18:15.182401 | orchestrator | + tenant_id = (known after apply) 2026-03-18 01:18:15.182405 | orchestrator | } 2026-03-18 01:18:15.182409 | orchestrator | 2026-03-18 01:18:15.182413 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2026-03-18 01:18:15.182416 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2026-03-18 01:18:15.182423 | orchestrator | + direction = "ingress" 2026-03-18 01:18:15.182427 
| orchestrator | + ethertype = "IPv4" 2026-03-18 01:18:15.182431 | orchestrator | + id = (known after apply) 2026-03-18 01:18:15.182435 | orchestrator | + protocol = "tcp" 2026-03-18 01:18:15.182439 | orchestrator | + region = (known after apply) 2026-03-18 01:18:15.182443 | orchestrator | + remote_address_group_id = (known after apply) 2026-03-18 01:18:15.182446 | orchestrator | + remote_group_id = (known after apply) 2026-03-18 01:18:15.182450 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-03-18 01:18:15.182454 | orchestrator | + security_group_id = (known after apply) 2026-03-18 01:18:15.182458 | orchestrator | + tenant_id = (known after apply) 2026-03-18 01:18:15.182462 | orchestrator | } 2026-03-18 01:18:15.182466 | orchestrator | 2026-03-18 01:18:15.182469 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2026-03-18 01:18:15.182473 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2026-03-18 01:18:15.182477 | orchestrator | + direction = "ingress" 2026-03-18 01:18:15.182481 | orchestrator | + ethertype = "IPv4" 2026-03-18 01:18:15.182485 | orchestrator | + id = (known after apply) 2026-03-18 01:18:15.182489 | orchestrator | + protocol = "udp" 2026-03-18 01:18:15.182492 | orchestrator | + region = (known after apply) 2026-03-18 01:18:15.182496 | orchestrator | + remote_address_group_id = (known after apply) 2026-03-18 01:18:15.182500 | orchestrator | + remote_group_id = (known after apply) 2026-03-18 01:18:15.182504 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-03-18 01:18:15.182508 | orchestrator | + security_group_id = (known after apply) 2026-03-18 01:18:15.182511 | orchestrator | + tenant_id = (known after apply) 2026-03-18 01:18:15.182515 | orchestrator | } 2026-03-18 01:18:15.182519 | orchestrator | 2026-03-18 01:18:15.182523 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will 
be created
2026-03-18 01:18:15.182530 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-03-18 01:18:15.182534 | orchestrator |       + direction               = "ingress"
2026-03-18 01:18:15.182537 | orchestrator |       + ethertype               = "IPv4"
2026-03-18 01:18:15.182541 | orchestrator |       + id                      = (known after apply)
2026-03-18 01:18:15.182545 | orchestrator |       + protocol                = "icmp"
2026-03-18 01:18:15.182549 | orchestrator |       + region                  = (known after apply)
2026-03-18 01:18:15.182552 | orchestrator |       + remote_address_group_id = (known after apply)
2026-03-18 01:18:15.182556 | orchestrator |       + remote_group_id         = (known after apply)
2026-03-18 01:18:15.182560 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-03-18 01:18:15.182564 | orchestrator |       + security_group_id       = (known after apply)
2026-03-18 01:18:15.182568 | orchestrator |       + tenant_id               = (known after apply)
2026-03-18 01:18:15.182572 | orchestrator |     }
2026-03-18 01:18:15.182575 | orchestrator |
2026-03-18 01:18:15.182579 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-03-18 01:18:15.182583 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-03-18 01:18:15.182587 | orchestrator |       + direction               = "ingress"
2026-03-18 01:18:15.182594 | orchestrator |       + ethertype               = "IPv4"
2026-03-18 01:18:15.182598 | orchestrator |       + id                      = (known after apply)
2026-03-18 01:18:15.182601 | orchestrator |       + protocol                = "tcp"
2026-03-18 01:18:15.182605 | orchestrator |       + region                  = (known after apply)
2026-03-18 01:18:15.182609 | orchestrator |       + remote_address_group_id = (known after apply)
2026-03-18 01:18:15.182613 | orchestrator |       + remote_group_id         = (known after apply)
2026-03-18 01:18:15.182617 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-03-18 01:18:15.182621 | orchestrator |       + security_group_id       = (known after apply)
2026-03-18 01:18:15.182624 | orchestrator |       + tenant_id               = (known after apply)
2026-03-18 01:18:15.182628 | orchestrator |     }
2026-03-18 01:18:15.182632 | orchestrator |
2026-03-18 01:18:15.182636 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-03-18 01:18:15.182640 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-03-18 01:18:15.182643 | orchestrator |       + direction               = "ingress"
2026-03-18 01:18:15.182647 | orchestrator |       + ethertype               = "IPv4"
2026-03-18 01:18:15.182651 | orchestrator |       + id                      = (known after apply)
2026-03-18 01:18:15.182671 | orchestrator |       + protocol                = "udp"
2026-03-18 01:18:15.182678 | orchestrator |       + region                  = (known after apply)
2026-03-18 01:18:15.182682 | orchestrator |       + remote_address_group_id = (known after apply)
2026-03-18 01:18:15.182686 | orchestrator |       + remote_group_id         = (known after apply)
2026-03-18 01:18:15.182690 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-03-18 01:18:15.182694 | orchestrator |       + security_group_id       = (known after apply)
2026-03-18 01:18:15.182698 | orchestrator |       + tenant_id               = (known after apply)
2026-03-18 01:18:15.182701 | orchestrator |     }
2026-03-18 01:18:15.182705 | orchestrator |
2026-03-18 01:18:15.182709 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-03-18 01:18:15.182713 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-03-18 01:18:15.182717 | orchestrator |       + direction               = "ingress"
2026-03-18 01:18:15.182720 | orchestrator |       + ethertype               = "IPv4"
2026-03-18 01:18:15.182724 | orchestrator |       + id                      = (known after apply)
2026-03-18 01:18:15.182728 | orchestrator |       + protocol                = "icmp"
2026-03-18 01:18:15.182732 | orchestrator |       + region                  = (known after apply)
2026-03-18 01:18:15.182735 | orchestrator |       + remote_address_group_id = (known after apply)
2026-03-18 01:18:15.182739 | orchestrator |       + remote_group_id         = (known after apply)
2026-03-18 01:18:15.182743 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-03-18 01:18:15.182747 | orchestrator |       + security_group_id       = (known after apply)
2026-03-18 01:18:15.182750 | orchestrator |       + tenant_id               = (known after apply)
2026-03-18 01:18:15.182758 | orchestrator |     }
2026-03-18 01:18:15.182761 | orchestrator |
2026-03-18 01:18:15.182765 | orchestrator |   # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-03-18 01:18:15.182769 | orchestrator |   + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-03-18 01:18:15.182773 | orchestrator |       + description             = "vrrp"
2026-03-18 01:18:15.182777 | orchestrator |       + direction               = "ingress"
2026-03-18 01:18:15.182780 | orchestrator |       + ethertype               = "IPv4"
2026-03-18 01:18:15.182784 | orchestrator |       + id                      = (known after apply)
2026-03-18 01:18:15.182788 | orchestrator |       + protocol                = "112"
2026-03-18 01:18:15.182792 | orchestrator |       + region                  = (known after apply)
2026-03-18 01:18:15.182795 | orchestrator |       + remote_address_group_id = (known after apply)
2026-03-18 01:18:15.182799 | orchestrator |       + remote_group_id         = (known after apply)
2026-03-18 01:18:15.182803 | orchestrator |       + remote_ip_prefix        = "0.0.0.0/0"
2026-03-18 01:18:15.182807 | orchestrator |       + security_group_id       = (known after apply)
2026-03-18 01:18:15.182810 | orchestrator |       + tenant_id               = (known after apply)
2026-03-18 01:18:15.182814 | orchestrator |     }
2026-03-18 01:18:15.182818 | orchestrator |
2026-03-18 01:18:15.182822 | orchestrator |   # openstack_networking_secgroup_v2.security_group_management will be created
2026-03-18 01:18:15.182826 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-03-18 01:18:15.182830 | orchestrator |       + all_tags    = (known after apply)
2026-03-18 01:18:15.182834 | orchestrator |       + description = "management security group"
2026-03-18 01:18:15.182837 | orchestrator |       + id          = (known after apply)
2026-03-18 01:18:15.182841 | orchestrator |       + name        = "testbed-management"
2026-03-18 01:18:15.182845 | orchestrator |       + region      = (known after apply)
2026-03-18 01:18:15.182849 | orchestrator |       + stateful    = (known after apply)
2026-03-18 01:18:15.182853 | orchestrator |       + tenant_id   = (known after apply)
2026-03-18 01:18:15.182856 | orchestrator |     }
2026-03-18 01:18:15.182860 | orchestrator |
2026-03-18 01:18:15.182864 | orchestrator |   # openstack_networking_secgroup_v2.security_group_node will be created
2026-03-18 01:18:15.182868 | orchestrator |   + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-03-18 01:18:15.182872 | orchestrator |       + all_tags    = (known after apply)
2026-03-18 01:18:15.182878 | orchestrator |       + description = "node security group"
2026-03-18 01:18:15.182884 | orchestrator |       + id          = (known after apply)
2026-03-18 01:18:15.182890 | orchestrator |       + name        = "testbed-node"
2026-03-18 01:18:15.182895 | orchestrator |       + region      = (known after apply)
2026-03-18 01:18:15.182901 | orchestrator |       + stateful    = (known after apply)
2026-03-18 01:18:15.182907 | orchestrator |       + tenant_id   = (known after apply)
2026-03-18 01:18:15.182913 | orchestrator |     }
2026-03-18 01:18:15.182919 | orchestrator |
2026-03-18 01:18:15.182924 | orchestrator |   # openstack_networking_subnet_v2.subnet_management will be created
2026-03-18 01:18:15.182930 | orchestrator |   + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-03-18 01:18:15.182935 | orchestrator |       + all_tags          = (known after apply)
2026-03-18 01:18:15.182941 | orchestrator |       + cidr              = "192.168.16.0/20"
2026-03-18 01:18:15.182947 | orchestrator |       + dns_nameservers   = [
2026-03-18 01:18:15.182954 | orchestrator |           + "8.8.8.8",
2026-03-18 01:18:15.182961 | orchestrator |           + "9.9.9.9",
2026-03-18 01:18:15.182967 | orchestrator |         ]
2026-03-18 01:18:15.182973 | orchestrator |       + enable_dhcp       = true
2026-03-18 01:18:15.182979 | orchestrator |       + gateway_ip        = (known after apply)
2026-03-18 01:18:15.182985 | orchestrator |       + id                = (known after apply)
2026-03-18 01:18:15.182989 | orchestrator |       + ip_version        = 4
2026-03-18 01:18:15.182993 | orchestrator |       + ipv6_address_mode = (known after apply)
2026-03-18 01:18:15.182997 | orchestrator |       + ipv6_ra_mode      = (known after apply)
2026-03-18 01:18:15.183001 | orchestrator |       + name              = "subnet-testbed-management"
2026-03-18 01:18:15.183007 | orchestrator |       + network_id        = (known after apply)
2026-03-18 01:18:15.183017 | orchestrator |       + no_gateway        = false
2026-03-18 01:18:15.183023 | orchestrator |       + region            = (known after apply)
2026-03-18 01:18:15.183029 | orchestrator |       + service_types     = (known after apply)
2026-03-18 01:18:15.183050 | orchestrator |       + tenant_id         = (known after apply)
2026-03-18 01:18:15.183057 | orchestrator |
2026-03-18 01:18:15.183063 | orchestrator |       + allocation_pool {
2026-03-18 01:18:15.183069 | orchestrator |           + end   = "192.168.31.250"
2026-03-18 01:18:15.183075 | orchestrator |           + start = "192.168.31.200"
2026-03-18 01:18:15.183079 | orchestrator |         }
2026-03-18 01:18:15.183083 | orchestrator |     }
2026-03-18 01:18:15.183086 | orchestrator |
2026-03-18 01:18:15.183090 | orchestrator |   # terraform_data.image will be created
2026-03-18 01:18:15.183094 | orchestrator |   + resource "terraform_data" "image" {
2026-03-18 01:18:15.183098 | orchestrator |       + id     = (known after apply)
2026-03-18 01:18:15.183104 | orchestrator |       + input  = "Ubuntu 24.04"
2026-03-18 01:18:15.183110 | orchestrator |       + output = (known after apply)
2026-03-18 01:18:15.183117 | orchestrator |     }
2026-03-18 01:18:15.183123 | orchestrator |
2026-03-18 01:18:15.183129 | orchestrator |   # terraform_data.image_node will be created
2026-03-18 01:18:15.183135 | orchestrator |   + resource "terraform_data" "image_node" {
2026-03-18 01:18:15.183142 | orchestrator |       + id     = (known after apply)
2026-03-18 01:18:15.183148 | orchestrator |       + input  = "Ubuntu 24.04"
2026-03-18 01:18:15.183154 | orchestrator |       + output = (known after apply)
2026-03-18 01:18:15.183160 | orchestrator |     }
2026-03-18 01:18:15.183167 | orchestrator |
2026-03-18 01:18:15.183173 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-03-18 01:18:15.183179 | orchestrator |
2026-03-18 01:18:15.183185 | orchestrator | Changes to Outputs:
2026-03-18 01:18:15.183189 | orchestrator |   + manager_address = (sensitive value)
2026-03-18 01:18:15.183193 | orchestrator |   + private_key     = (sensitive value)
2026-03-18 01:18:15.441349 | orchestrator | terraform_data.image_node: Creating...
2026-03-18 01:18:15.441794 | orchestrator | terraform_data.image: Creating...
2026-03-18 01:18:15.441934 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=983075c0-0c2b-1c14-0749-0495ffcdfd91]
2026-03-18 01:18:15.442379 | orchestrator | terraform_data.image: Creation complete after 0s [id=32b8069e-3ac8-4101-4a7e-13cccf61f0eb]
2026-03-18 01:18:15.469835 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-18 01:18:15.475599 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-18 01:18:15.475967 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-18 01:18:15.476698 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-18 01:18:15.476809 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-18 01:18:15.477787 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-18 01:18:15.478289 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-18 01:18:15.479238 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-18 01:18:15.480289 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-18 01:18:15.485005 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-18 01:18:15.934871 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-18 01:18:15.942039 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-18 01:18:16.511070 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=405b3c3b-f1c1-48bb-aba5-1eebef0f96d1]
2026-03-18 01:18:16.513653 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-18 01:18:16.564833 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-18 01:18:16.569160 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-18 01:18:16.664069 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-03-18 01:18:16.673755 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-18 01:18:19.129244 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a]
2026-03-18 01:18:19.133014 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-18 01:18:19.138568 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=ebabc839-a277-44fc-abeb-49fc313c2e1e]
2026-03-18 01:18:19.143571 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-18 01:18:19.162332 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=80734d97-478b-4a5e-879f-889cd258efbc]
2026-03-18 01:18:19.171418 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-18 01:18:19.173370 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=54344bae-1dab-46bd-b563-a8bed09fd568]
2026-03-18 01:18:19.180095 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-18 01:18:19.183302 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=92bad715-eec0-475b-8af5-3664f3458c00]
2026-03-18 01:18:19.183862 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=343cfa22-0406-40f4-a0e7-97fc1bbcc216]
2026-03-18 01:18:19.191076 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-18 01:18:19.192951 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-18 01:18:19.225162 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=3c07f10e-07ed-4136-af5a-52ab111aa768]
2026-03-18 01:18:19.229924 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-18 01:18:19.253952 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=26f175df-aba2-4da2-ab55-e525c2d3b7aa]
2026-03-18 01:18:19.263885 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-18 01:18:19.269012 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=8d3d61674b86f0ec49032d519f1080d206c238c8]
2026-03-18 01:18:19.269497 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=f4bc8da1-65d0-4f2a-8066-3fa706e86a6a]
2026-03-18 01:18:19.276474 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-18 01:18:19.281737 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=93994b10ab23c37a898c08119a1e439c9c1e24e9]
2026-03-18 01:18:19.979792 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=80ceaeba-9674-4dab-afdc-a02b859856c8]
2026-03-18 01:18:19.987523 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-18 01:18:20.032146 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=93c1740b-d129-4fb1-8a5c-0a256369ea5e]
2026-03-18 01:18:22.538116 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=d04444e1-2fbb-477e-b996-d330c703cca0]
2026-03-18 01:18:22.551634 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=248efa21-e866-4de7-b593-1e4360051f6d]
2026-03-18 01:18:22.572634 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=1c5784ed-a5cf-4a45-b5e2-476691d23561]
2026-03-18 01:18:22.593696 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=a74f897f-9956-4887-8b8d-6711f76e2ca2]
2026-03-18 01:18:22.626132 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=15119f5e-a47f-40dc-b692-43d931272403]
2026-03-18 01:18:22.626216 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=bbfcb729-d4f0-4316-9872-0560f57ec1dc]
2026-03-18 01:18:23.045037 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=3b92b68e-4062-40ee-8aae-ff976757095a]
2026-03-18 01:18:23.050360 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-18 01:18:23.051303 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-18 01:18:23.052274 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-18 01:18:23.264803 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=e074d7e8-ccdc-4819-b64c-2898aaf3cc30]
2026-03-18 01:18:23.273647 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-18 01:18:23.273770 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-18 01:18:23.274088 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=aead63cb-c09b-4a05-bbdc-c47106b19e42]
2026-03-18 01:18:23.275433 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-18 01:18:23.276181 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-18 01:18:23.278682 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-18 01:18:23.279600 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-18 01:18:23.284714 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-18 01:18:23.288963 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-18 01:18:23.290380 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-18 01:18:23.418406 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=0bfcd3c6-0de3-42b9-9c54-104204623bf1]
2026-03-18 01:18:23.426110 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-18 01:18:23.456410 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=2b429445-4843-46e3-a0ec-c964d39c020d]
2026-03-18 01:18:23.468711 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-18 01:18:23.575912 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=de6eac72-86ab-4d2a-ab13-38fed5b26a42]
2026-03-18 01:18:23.584935 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-18 01:18:23.616614 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=1faa06f7-0b05-44e1-b7e8-2a8b7f3645f2]
2026-03-18 01:18:23.627295 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-18 01:18:23.843827 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=8b789fa3-33b9-4166-8e5c-6f8a5d2d2f11]
2026-03-18 01:18:23.846434 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=0ae8155f-96c5-4e21-948d-84f426060dfb]
2026-03-18 01:18:23.856007 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-18 01:18:23.859364 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-18 01:18:24.080959 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=fbb0d3f9-d9cf-4daa-b8a3-f4cc95479d6b]
2026-03-18 01:18:24.085160 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-18 01:18:24.211620 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=9c529f26-0b10-42a2-8576-85513ffbfbaf]
2026-03-18 01:18:24.272809 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=00f642c9-e49b-43ce-be9e-c1a1af064340]
2026-03-18 01:18:24.380549 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=493256cb-443f-4165-ae2d-196cc45531d7]
2026-03-18 01:18:24.385915 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=cc9a090a-535b-4903-a003-5cbe563a06d4]
2026-03-18 01:18:24.392653 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=dc52a224-7163-4e2c-bb9f-c42f6d47e26b]
2026-03-18 01:18:24.543019 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=c49e64de-c599-4ca3-9a74-35ee18a86c9b]
2026-03-18 01:18:24.567903 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=3d02d622-92a6-4c39-9acd-0c68d3451ccf]
2026-03-18 01:18:24.568852 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=a48d3192-bbd7-4d29-a6ae-161c1a80e06e]
2026-03-18 01:18:24.768907 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=3eac6123-9610-499c-83ec-c00dda37c847]
2026-03-18 01:18:25.528317 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=13fa8238-f0dd-4592-81a5-dae708af8f17]
2026-03-18 01:18:25.652065 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-18 01:18:25.652134 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-18 01:18:25.652143 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-18 01:18:25.652151 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-18 01:18:25.652159 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-18 01:18:25.652167 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-18 01:18:25.652175 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-18 01:18:27.169777 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=3c39e053-8db2-4ebd-b849-62bbd31ffcfb]
2026-03-18 01:18:27.177617 | orchestrator | local_file.inventory: Creating...
2026-03-18 01:18:27.184132 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-18 01:18:27.184355 | orchestrator | local_file.inventory: Creation complete after 0s [id=c06545f14a9743c4848470a02a493433fc2490c6]
2026-03-18 01:18:27.185083 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-18 01:18:27.190140 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=70a4d2e753d6bed094beae95fbd49df2c2802393]
2026-03-18 01:18:27.910249 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=3c39e053-8db2-4ebd-b849-62bbd31ffcfb]
2026-03-18 01:18:35.565031 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-18 01:18:35.566100 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-18 01:18:35.567243 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-18 01:18:35.567309 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-18 01:18:35.574735 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-18 01:18:35.583217 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-18 01:18:45.565428 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-18 01:18:45.566609 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-18 01:18:45.567804 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-18 01:18:45.567833 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-18 01:18:45.575408 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-18 01:18:45.583798 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-18 01:18:45.923375 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=5295ddf9-b794-4b9f-b24e-7bc0a81ad33c]
2026-03-18 01:18:45.941927 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=da29e566-c7a8-4438-9106-1140659f4ffa]
2026-03-18 01:18:45.979314 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=028a892c-43ab-45a2-aed2-b41fca581821]
2026-03-18 01:18:45.997579 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=b5315ed0-038e-4355-ae8a-f356647694ab]
2026-03-18 01:18:55.575990 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-18 01:18:55.584325 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-18 01:18:56.139860 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=a887d45a-e881-40b6-892d-013d491dcf67]
2026-03-18 01:18:56.201022 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=c5a60784-031b-44df-8397-70eee283a738]
2026-03-18 01:18:56.220267 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-18 01:18:56.222816 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-18 01:18:56.232941 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=8582077039575787276]
2026-03-18 01:18:56.235363 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-18 01:18:56.235441 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-18 01:18:56.242283 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-18 01:18:56.252037 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-18 01:18:56.256475 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-18 01:18:56.256625 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-18 01:18:56.262110 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-18 01:18:56.266554 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-18 01:18:56.281642 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-18 01:18:59.658752 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=028a892c-43ab-45a2-aed2-b41fca581821/26f175df-aba2-4da2-ab55-e525c2d3b7aa]
2026-03-18 01:18:59.658885 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=b5315ed0-038e-4355-ae8a-f356647694ab/9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a]
2026-03-18 01:18:59.675723 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=da29e566-c7a8-4438-9106-1140659f4ffa/343cfa22-0406-40f4-a0e7-97fc1bbcc216]
2026-03-18 01:18:59.680399 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=028a892c-43ab-45a2-aed2-b41fca581821/ebabc839-a277-44fc-abeb-49fc313c2e1e]
2026-03-18 01:18:59.704445 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=b5315ed0-038e-4355-ae8a-f356647694ab/f4bc8da1-65d0-4f2a-8066-3fa706e86a6a]
2026-03-18 01:18:59.735454 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=da29e566-c7a8-4438-9106-1140659f4ffa/54344bae-1dab-46bd-b563-a8bed09fd568]
2026-03-18 01:19:05.776741 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=028a892c-43ab-45a2-aed2-b41fca581821/3c07f10e-07ed-4136-af5a-52ab111aa768]
2026-03-18 01:19:05.818377 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=da29e566-c7a8-4438-9106-1140659f4ffa/92bad715-eec0-475b-8af5-3664f3458c00]
2026-03-18 01:19:05.819576 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=b5315ed0-038e-4355-ae8a-f356647694ab/80734d97-478b-4a5e-879f-889cd258efbc]
2026-03-18 01:19:06.285395 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-18 01:19:16.286917 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-18 01:19:16.567457 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=020a8f99-16ac-4b6c-96c6-e69ef79310a3]
2026-03-18 01:19:16.583731 | orchestrator |
2026-03-18 01:19:16.583796 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-18 01:19:16.583803 | orchestrator |
2026-03-18 01:19:16.583808 | orchestrator | Outputs:
2026-03-18 01:19:16.583812 | orchestrator |
2026-03-18 01:19:16.583824 | orchestrator | manager_address =
2026-03-18 01:19:16.583828 | orchestrator | private_key =
2026-03-18 01:19:16.787811 | orchestrator | ok: Runtime: 0:01:06.369174
2026-03-18 01:19:16.818941 |
2026-03-18 01:19:16.819072 | TASK [Fetch manager address]
2026-03-18 01:19:17.265922 | orchestrator | ok
2026-03-18 01:19:17.277126 |
2026-03-18 01:19:17.277363 | TASK [Set manager_host address]
2026-03-18 01:19:17.355805 | orchestrator | ok
2026-03-18 01:19:17.364116 |
2026-03-18 01:19:17.364230 | LOOP [Update ansible collections]
2026-03-18 01:19:18.372488 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-18 01:19:18.372835 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-18 01:19:18.372887 | orchestrator | Starting galaxy collection install process
2026-03-18 01:19:18.372923 | orchestrator | Process install dependency map
2026-03-18 01:19:18.372974 | orchestrator | Starting collection install process
2026-03-18 01:19:18.373058 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2026-03-18 01:19:18.373103 | orchestrator | Created collection for osism.commons:999.0.0 at
/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2026-03-18 01:19:18.373153 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-18 01:19:18.373237 | orchestrator | ok: Item: commons Runtime: 0:00:00.647692
2026-03-18 01:19:19.540686 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-18 01:19:19.540866 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-18 01:19:19.540916 | orchestrator | Starting galaxy collection install process
2026-03-18 01:19:19.541016 | orchestrator | Process install dependency map
2026-03-18 01:19:19.541054 | orchestrator | Starting collection install process
2026-03-18 01:19:19.541089 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2026-03-18 01:19:19.541123 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2026-03-18 01:19:19.541178 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-18 01:19:19.541223 | orchestrator | ok: Item: services Runtime: 0:00:00.824194
2026-03-18 01:19:19.563522 |
2026-03-18 01:19:19.563699 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-18 01:19:31.170736 | orchestrator | ok
2026-03-18 01:19:31.180663 |
2026-03-18 01:19:31.180793 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-18 01:20:31.224787 | orchestrator | ok
2026-03-18 01:20:31.235150 |
2026-03-18 01:20:31.235294 | TASK [Fetch manager ssh hostkey]
2026-03-18 01:20:32.816517 | orchestrator | Output suppressed because no_log was given
2026-03-18 01:20:32.826422 |
2026-03-18 01:20:32.826563 | TASK [Get ssh keypair from terraform environment]
2026-03-18 01:20:33.366794 | orchestrator | ok: Runtime: 0:00:00.010474
2026-03-18 01:20:33.378194 |
2026-03-18 01:20:33.378339 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-18 01:20:33.422617 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-18 01:20:33.432242 |
2026-03-18 01:20:33.432392 | TASK [Run manager part 0]
2026-03-18 01:20:34.409558 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-18 01:20:34.474088 | orchestrator |
2026-03-18 01:20:34.474144 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-18 01:20:34.474152 | orchestrator |
2026-03-18 01:20:34.474166 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-18 01:20:54.817039 | orchestrator | ok: [testbed-manager]
2026-03-18 01:20:54.817203 | orchestrator |
2026-03-18 01:20:54.817273 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-18 01:20:54.817313 | orchestrator |
2026-03-18 01:20:54.817338 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-18 01:20:56.807538 | orchestrator | ok: [testbed-manager]
2026-03-18 01:20:56.807601 | orchestrator |
2026-03-18 01:20:56.807609 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-18 01:20:57.531212 | orchestrator | ok: [testbed-manager]
2026-03-18 01:20:57.531260 | orchestrator |
2026-03-18 01:20:57.531268 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-18 01:20:57.573668 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:20:57.573730 | orchestrator |
2026-03-18 01:20:57.573761 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-18 01:20:57.602513 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:20:57.602596 | orchestrator |
2026-03-18 01:20:57.602605 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-18 01:20:57.631303 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:20:57.631378 | orchestrator |
2026-03-18 01:20:57.631386 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-18 01:20:57.664203 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:20:57.664263 | orchestrator |
2026-03-18 01:20:57.664270 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-18 01:20:57.698960 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:20:57.699027 | orchestrator |
2026-03-18 01:20:57.699038 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-18 01:20:57.733488 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:20:57.733583 | orchestrator |
2026-03-18 01:20:57.733605 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-18 01:20:57.772910 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:20:57.773000 | orchestrator |
2026-03-18 01:20:57.773018 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-18 01:20:58.635366 | orchestrator | changed: [testbed-manager]
2026-03-18 01:20:58.635421 | orchestrator |
2026-03-18 01:20:58.635427 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-18 01:23:55.695591 | orchestrator | changed: [testbed-manager]
2026-03-18 01:23:55.695677 | orchestrator |
2026-03-18 01:23:55.695693 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-18 01:25:16.126900 | orchestrator | changed: [testbed-manager]
2026-03-18 01:25:16.127121 | orchestrator |
2026-03-18 01:25:16.127141 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-18 01:25:38.627033 | orchestrator | changed: [testbed-manager]
2026-03-18 01:25:38.627145 | orchestrator |
2026-03-18 01:25:38.627169 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-18 01:25:48.081286 | orchestrator | changed: [testbed-manager]
2026-03-18 01:25:48.081360 | orchestrator |
2026-03-18 01:25:48.081377 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-18 01:25:48.126784 | orchestrator | ok: [testbed-manager]
2026-03-18 01:25:48.126872 | orchestrator |
2026-03-18 01:25:48.126898 | orchestrator | TASK [Get current user] ********************************************************
2026-03-18 01:25:48.922856 | orchestrator | ok: [testbed-manager]
2026-03-18 01:25:48.922927 | orchestrator |
2026-03-18 01:25:48.922939 | orchestrator | TASK [Create venv directory] ***************************************************
2026-03-18 01:25:49.713797 | orchestrator | changed: [testbed-manager]
2026-03-18 01:25:49.713849 | orchestrator |
2026-03-18 01:25:49.713861 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-03-18 01:25:57.108608 | orchestrator | changed: [testbed-manager]
2026-03-18 01:25:57.108656 | orchestrator |
2026-03-18 01:25:57.108685 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-03-18 01:26:03.957191 | orchestrator | changed: [testbed-manager]
2026-03-18 01:26:03.957286 | orchestrator |
2026-03-18 01:26:03.957305 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-03-18 01:26:07.085180 | orchestrator | changed: [testbed-manager]
2026-03-18 01:26:07.085291 | orchestrator |
2026-03-18 01:26:07.085314 | orchestrator | TASK
[Install docker >= 7.1.0] ************************************************* 2026-03-18 01:26:09.082489 | orchestrator | changed: [testbed-manager] 2026-03-18 01:26:09.082553 | orchestrator | 2026-03-18 01:26:09.082560 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-18 01:26:10.207889 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-18 01:26:10.207984 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-18 01:26:10.208028 | orchestrator | 2026-03-18 01:26:10.208042 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-18 01:26:10.247805 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-18 01:26:10.247874 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-18 01:26:10.247888 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-18 01:26:10.247900 | orchestrator | deprecation_warnings=False in ansible.cfg. 
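The "Create directories in /opt/src" task above iterates over the two collection repos and creates a directory per item. A minimal sketch of that per-item loop, using a temporary base directory in place of /opt/src (the base path is illustrative; the repo names mirror the log items):

```shell
#!/bin/sh
# Per-repo directory loop, as in the 'Create directories in /opt/src' task.
# A temporary base directory stands in for /opt/src.
base=$(mktemp -d)
for repo in osism/ansible-collection-commons osism/ansible-collection-services; do
  mkdir -p "$base/$repo"
done
# Count the created repo directories (two levels below the base).
created=$(find "$base" -mindepth 2 -maxdepth 2 -type d | wc -l | tr -d ' ')
echo "$created"
rm -rf "$base"
```

`mkdir -p` makes the loop idempotent, matching Ansible's `file` module semantics for `state: directory`.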
2026-03-18 01:26:13.642541 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-18 01:26:13.642630 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-18 01:26:13.642644 | orchestrator | 2026-03-18 01:26:13.642655 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-18 01:26:14.218960 | orchestrator | changed: [testbed-manager] 2026-03-18 01:26:14.219044 | orchestrator | 2026-03-18 01:26:14.219054 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-18 01:29:34.436650 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-18 01:29:34.436760 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-18 01:29:34.436779 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-18 01:29:34.436792 | orchestrator | 2026-03-18 01:29:34.436804 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-18 01:29:36.870435 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-18 01:29:36.870478 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-18 01:29:36.870484 | orchestrator | 2026-03-18 01:29:36.870492 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-18 01:29:36.870502 | orchestrator | 2026-03-18 01:29:36.870510 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-18 01:29:38.368194 | orchestrator | ok: [testbed-manager] 2026-03-18 01:29:38.368284 | orchestrator | 2026-03-18 01:29:38.368304 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-18 01:29:38.410315 | orchestrator | ok: [testbed-manager] 2026-03-18 01:29:38.410414 | 
orchestrator | 2026-03-18 01:29:38.410431 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-18 01:29:38.490231 | orchestrator | ok: [testbed-manager] 2026-03-18 01:29:38.490309 | orchestrator | 2026-03-18 01:29:38.490319 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-18 01:29:39.327831 | orchestrator | changed: [testbed-manager] 2026-03-18 01:29:39.327875 | orchestrator | 2026-03-18 01:29:39.327884 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-18 01:29:40.089783 | orchestrator | changed: [testbed-manager] 2026-03-18 01:29:40.089892 | orchestrator | 2026-03-18 01:29:40.089910 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-18 01:29:41.543522 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-18 01:29:41.543570 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-18 01:29:41.543578 | orchestrator | 2026-03-18 01:29:41.543594 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-18 01:29:42.952063 | orchestrator | changed: [testbed-manager] 2026-03-18 01:29:42.952155 | orchestrator | 2026-03-18 01:29:42.952166 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-18 01:29:44.810746 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-18 01:29:44.810836 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-18 01:29:44.810851 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-18 01:29:44.810863 | orchestrator | 2026-03-18 01:29:44.810876 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-18 01:29:44.866980 | orchestrator | skipping: 
[testbed-manager] 2026-03-18 01:29:44.867049 | orchestrator | 2026-03-18 01:29:44.867060 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-18 01:29:44.945886 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:29:44.945976 | orchestrator | 2026-03-18 01:29:44.945995 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-18 01:29:45.527784 | orchestrator | changed: [testbed-manager] 2026-03-18 01:29:45.527896 | orchestrator | 2026-03-18 01:29:45.527916 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-18 01:29:45.602246 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:29:45.602345 | orchestrator | 2026-03-18 01:29:45.602363 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-18 01:29:46.498201 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-18 01:29:46.498261 | orchestrator | changed: [testbed-manager] 2026-03-18 01:29:46.498271 | orchestrator | 2026-03-18 01:29:46.498279 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-18 01:29:46.539747 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:29:46.539789 | orchestrator | 2026-03-18 01:29:46.539798 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-18 01:29:46.572659 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:29:46.572701 | orchestrator | 2026-03-18 01:29:46.572710 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-18 01:29:46.609095 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:29:46.609274 | orchestrator | 2026-03-18 01:29:46.609305 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-18 01:29:46.681287 | 
orchestrator | skipping: [testbed-manager] 2026-03-18 01:29:46.681380 | orchestrator | 2026-03-18 01:29:46.681396 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-18 01:29:47.506770 | orchestrator | ok: [testbed-manager] 2026-03-18 01:29:47.506865 | orchestrator | 2026-03-18 01:29:47.506881 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-18 01:29:47.506894 | orchestrator | 2026-03-18 01:29:47.506905 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-18 01:29:49.019192 | orchestrator | ok: [testbed-manager] 2026-03-18 01:29:49.019282 | orchestrator | 2026-03-18 01:29:49.019299 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-18 01:29:50.003994 | orchestrator | changed: [testbed-manager] 2026-03-18 01:29:50.004907 | orchestrator | 2026-03-18 01:29:50.004942 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 01:29:50.004957 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-18 01:29:50.004969 | orchestrator | 2026-03-18 01:29:50.300169 | orchestrator | ok: Runtime: 0:09:16.390155 2026-03-18 01:29:50.315360 | 2026-03-18 01:29:50.315494 | TASK [Point out that logging in to the manager is now possible] 2026-03-18 01:29:50.363522 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-03-18 01:29:50.373615 | 2026-03-18 01:29:50.373729 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-18 01:29:50.412271 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output shown here. It takes a few minutes for this task to complete.
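The long-running manager tasks above report only an "ok: Runtime: …" line once they finish, with the playbook's own output captured elsewhere. The pattern is simply: run the step with output redirected to a log, then report the elapsed time. A minimal sketch (the placeholder command and log path are illustrative):

```shell
#!/bin/sh
# Run a long step with its output captured to a log file, then report
# only the runtime -- the shape of the 'ok: Runtime: ...' lines above.
log=$(mktemp)
start=$(date +%s)
sh -c 'echo doing work; sleep 1' > "$log" 2>&1   # stand-in for the playbook run
end=$(date +%s)
runtime=$((end - start))
echo "ok: Runtime: ${runtime}s (details in $log)"
```

Redirecting both stdout and stderr keeps the console quiet while preserving everything for later inspection.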
2026-03-18 01:29:50.422439 | 2026-03-18 01:29:50.422573 | TASK [Run manager part 1 + 2] 2026-03-18 01:29:51.277468 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-18 01:29:51.334581 | orchestrator | 2026-03-18 01:29:51.334628 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-18 01:29:51.334635 | orchestrator | 2026-03-18 01:29:51.334648 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-18 01:29:54.441387 | orchestrator | ok: [testbed-manager] 2026-03-18 01:29:54.441434 | orchestrator | 2026-03-18 01:29:54.441451 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-18 01:29:54.476214 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:29:54.476259 | orchestrator | 2026-03-18 01:29:54.476267 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-18 01:29:54.516480 | orchestrator | ok: [testbed-manager] 2026-03-18 01:29:54.516536 | orchestrator | 2026-03-18 01:29:54.516547 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-18 01:29:54.569492 | orchestrator | ok: [testbed-manager] 2026-03-18 01:29:54.569551 | orchestrator | 2026-03-18 01:29:54.569564 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-18 01:29:54.637631 | orchestrator | ok: [testbed-manager] 2026-03-18 01:29:54.637690 | orchestrator | 2026-03-18 01:29:54.637698 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-18 01:29:54.699339 | orchestrator | ok: [testbed-manager] 2026-03-18 01:29:54.699394 | orchestrator | 2026-03-18 01:29:54.699401 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-18 01:29:54.739214 | 
orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-18 01:29:54.739284 | orchestrator | 2026-03-18 01:29:54.739294 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-18 01:29:55.441573 | orchestrator | ok: [testbed-manager] 2026-03-18 01:29:55.441672 | orchestrator | 2026-03-18 01:29:55.441695 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-18 01:29:55.488230 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:29:55.488300 | orchestrator | 2026-03-18 01:29:55.488311 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-18 01:29:56.888021 | orchestrator | changed: [testbed-manager] 2026-03-18 01:29:56.888149 | orchestrator | 2026-03-18 01:29:56.888172 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-18 01:29:57.462043 | orchestrator | ok: [testbed-manager] 2026-03-18 01:29:57.462142 | orchestrator | 2026-03-18 01:29:57.462151 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-18 01:29:58.608724 | orchestrator | changed: [testbed-manager] 2026-03-18 01:29:58.608812 | orchestrator | 2026-03-18 01:29:58.608837 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-18 01:30:15.188576 | orchestrator | changed: [testbed-manager] 2026-03-18 01:30:15.188724 | orchestrator | 2026-03-18 01:30:15.188742 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-18 01:30:15.885185 | orchestrator | ok: [testbed-manager] 2026-03-18 01:30:15.885287 | orchestrator | 2026-03-18 01:30:15.885312 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-18 01:30:15.943382 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:30:15.943450 | orchestrator | 2026-03-18 01:30:15.943460 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-18 01:30:16.976692 | orchestrator | changed: [testbed-manager] 2026-03-18 01:30:16.976774 | orchestrator | 2026-03-18 01:30:16.976788 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-18 01:30:18.005793 | orchestrator | changed: [testbed-manager] 2026-03-18 01:30:18.005919 | orchestrator | 2026-03-18 01:30:18.005931 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-18 01:30:18.634143 | orchestrator | changed: [testbed-manager] 2026-03-18 01:30:18.634251 | orchestrator | 2026-03-18 01:30:18.634266 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-18 01:30:18.683521 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-18 01:30:18.683642 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-18 01:30:18.683659 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-18 01:30:18.683670 | orchestrator | deprecation_warnings=False in ansible.cfg. 
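The "Remove sources.list file" and "Copy ubuntu.sources file" tasks above switch the manager from the legacy one-line APT format to a deb822-style /etc/apt/sources.list.d/ubuntu.sources. For orientation, a deb822 entry generally looks like the following; the mirror URI, suites, and components here are generic Ubuntu 24.04 defaults for illustration, not necessarily the testbed's actual values:

```
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
```

Each stanza bundles what the old format spread over several `deb` lines, and `Signed-By` pins the trusted key per source.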
2026-03-18 01:30:20.786848 | orchestrator | changed: [testbed-manager] 2026-03-18 01:30:20.786963 | orchestrator | 2026-03-18 01:30:20.786987 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-18 01:30:30.331758 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-18 01:30:30.331819 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-18 01:30:30.331828 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-18 01:30:30.331835 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-18 01:30:30.331848 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-18 01:30:30.331854 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-18 01:30:30.331860 | orchestrator | 2026-03-18 01:30:30.331866 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-18 01:30:31.421404 | orchestrator | changed: [testbed-manager] 2026-03-18 01:30:31.421524 | orchestrator | 2026-03-18 01:30:31.421543 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-18 01:30:31.469577 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:30:31.469672 | orchestrator | 2026-03-18 01:30:31.469688 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-18 01:30:34.809048 | orchestrator | changed: [testbed-manager] 2026-03-18 01:30:34.809196 | orchestrator | 2026-03-18 01:30:34.809225 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-18 01:30:34.854513 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:30:34.854611 | orchestrator | 2026-03-18 01:30:34.854628 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-18 01:32:24.151255 | orchestrator | changed: [testbed-manager] 2026-03-18 
01:32:24.151301 | orchestrator | 2026-03-18 01:32:24.151309 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-18 01:32:25.421777 | orchestrator | ok: [testbed-manager] 2026-03-18 01:32:25.421861 | orchestrator | 2026-03-18 01:32:25.421875 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 01:32:25.421887 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-18 01:32:25.421900 | orchestrator | 2026-03-18 01:32:25.619546 | orchestrator | ok: Runtime: 0:02:34.812474 2026-03-18 01:32:25.631515 | 2026-03-18 01:32:25.631661 | TASK [Reboot manager] 2026-03-18 01:32:27.166741 | orchestrator | ok: Runtime: 0:00:01.019287 2026-03-18 01:32:27.181914 | 2026-03-18 01:32:27.182060 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-18 01:32:43.590404 | orchestrator | ok 2026-03-18 01:32:43.601235 | 2026-03-18 01:32:43.601372 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-18 01:33:43.644485 | orchestrator | ok 2026-03-18 01:33:43.653884 | 2026-03-18 01:33:43.654006 | TASK [Deploy manager + bootstrap nodes] 2026-03-18 01:33:46.464593 | orchestrator | 2026-03-18 01:33:46.464833 | orchestrator | # DEPLOY MANAGER 2026-03-18 01:33:46.464857 | orchestrator | 2026-03-18 01:33:46.464870 | orchestrator | + set -e 2026-03-18 01:33:46.464882 | orchestrator | + echo 2026-03-18 01:33:46.464894 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-18 01:33:46.464909 | orchestrator | + echo 2026-03-18 01:33:46.464957 | orchestrator | + cat /opt/manager-vars.sh 2026-03-18 01:33:46.468727 | orchestrator | export NUMBER_OF_NODES=6 2026-03-18 01:33:46.468809 | orchestrator | 2026-03-18 01:33:46.468827 | orchestrator | export CEPH_VERSION=reef 2026-03-18 01:33:46.468844 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-18 01:33:46.468858 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-03-18 01:33:46.468888 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-18 01:33:46.468901 | orchestrator | 2026-03-18 01:33:46.468923 | orchestrator | export ARA=false 2026-03-18 01:33:46.468939 | orchestrator | export DEPLOY_MODE=manager 2026-03-18 01:33:46.468960 | orchestrator | export TEMPEST=false 2026-03-18 01:33:46.468974 | orchestrator | export IS_ZUUL=true 2026-03-18 01:33:46.468982 | orchestrator | 2026-03-18 01:33:46.468996 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 01:33:46.469005 | orchestrator | export EXTERNAL_API=false 2026-03-18 01:33:46.469013 | orchestrator | 2026-03-18 01:33:46.469021 | orchestrator | export IMAGE_USER=ubuntu 2026-03-18 01:33:46.469032 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-18 01:33:46.469040 | orchestrator | 2026-03-18 01:33:46.469048 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-18 01:33:46.469056 | orchestrator | 2026-03-18 01:33:46.469064 | orchestrator | + echo 2026-03-18 01:33:46.469073 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-18 01:33:46.470349 | orchestrator | ++ export INTERACTIVE=false 2026-03-18 01:33:46.470405 | orchestrator | ++ INTERACTIVE=false 2026-03-18 01:33:46.470414 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-18 01:33:46.470422 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-18 01:33:46.470749 | orchestrator | + source /opt/manager-vars.sh 2026-03-18 01:33:46.470766 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-18 01:33:46.470780 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-18 01:33:46.470790 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-18 01:33:46.470798 | orchestrator | ++ CEPH_VERSION=reef 2026-03-18 01:33:46.470805 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-18 01:33:46.470821 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-18 01:33:46.470830 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-18 01:33:46.470840 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-18 01:33:46.470849 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-18 01:33:46.470882 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-18 01:33:46.470892 | orchestrator | ++ export ARA=false 2026-03-18 01:33:46.470904 | orchestrator | ++ ARA=false 2026-03-18 01:33:46.470944 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-18 01:33:46.470955 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-18 01:33:46.471431 | orchestrator | ++ export TEMPEST=false 2026-03-18 01:33:46.471451 | orchestrator | ++ TEMPEST=false 2026-03-18 01:33:46.471459 | orchestrator | ++ export IS_ZUUL=true 2026-03-18 01:33:46.471466 | orchestrator | ++ IS_ZUUL=true 2026-03-18 01:33:46.471474 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 01:33:46.471482 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 01:33:46.471490 | orchestrator | ++ export EXTERNAL_API=false 2026-03-18 01:33:46.471503 | orchestrator | ++ EXTERNAL_API=false 2026-03-18 01:33:46.471515 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-18 01:33:46.471522 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-18 01:33:46.471530 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-18 01:33:46.471537 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-18 01:33:46.471704 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-18 01:33:46.471718 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-18 01:33:46.471727 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-18 01:33:46.527300 | orchestrator | + docker version 2026-03-18 01:33:46.652647 | orchestrator | Client: Docker Engine - Community 2026-03-18 01:33:46.652770 | orchestrator | Version: 27.5.1 2026-03-18 01:33:46.652790 | orchestrator | API version: 1.47 2026-03-18 01:33:46.652802 | orchestrator | Go version: go1.22.11 2026-03-18 01:33:46.652813 | orchestrator | Git commit: 9f9e405 2026-03-18 01:33:46.652824 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-18 01:33:46.652836 | orchestrator | OS/Arch: linux/amd64 2026-03-18 01:33:46.652846 | orchestrator | Context: default 2026-03-18 01:33:46.652857 | orchestrator | 2026-03-18 01:33:46.652868 | orchestrator | Server: Docker Engine - Community 2026-03-18 01:33:46.652879 | orchestrator | Engine: 2026-03-18 01:33:46.652890 | orchestrator | Version: 27.5.1 2026-03-18 01:33:46.652901 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-18 01:33:46.652947 | orchestrator | Go version: go1.22.11 2026-03-18 01:33:46.652959 | orchestrator | Git commit: 4c9b3b0 2026-03-18 01:33:46.652970 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-18 01:33:46.652980 | orchestrator | OS/Arch: linux/amd64 2026-03-18 01:33:46.652991 | orchestrator | Experimental: false 2026-03-18 01:33:46.653002 | orchestrator | containerd: 2026-03-18 01:33:46.653026 | orchestrator | Version: v2.2.2 2026-03-18 01:33:46.653038 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-18 01:33:46.653049 | orchestrator | runc: 2026-03-18 01:33:46.653060 | orchestrator | Version: 1.3.4 2026-03-18 01:33:46.653071 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-18 01:33:46.653081 | orchestrator | docker-init: 2026-03-18 01:33:46.653092 | orchestrator | Version: 0.19.0 2026-03-18 01:33:46.653104 | orchestrator | GitCommit: de40ad0 2026-03-18 01:33:46.656070 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-18 01:33:46.663423 | orchestrator | + set -e 2026-03-18 01:33:46.663497 | orchestrator | + source /opt/manager-vars.sh 2026-03-18 01:33:46.663505 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-18 01:33:46.663512 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-18 01:33:46.663517 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-18 01:33:46.663523 | orchestrator | ++ CEPH_VERSION=reef 2026-03-18 01:33:46.663528 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-18 
01:33:46.663535 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-18 01:33:46.663540 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-18 01:33:46.663546 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-18 01:33:46.663552 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-18 01:33:46.663557 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-18 01:33:46.663563 | orchestrator | ++ export ARA=false 2026-03-18 01:33:46.663568 | orchestrator | ++ ARA=false 2026-03-18 01:33:46.663574 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-18 01:33:46.663579 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-18 01:33:46.663584 | orchestrator | ++ export TEMPEST=false 2026-03-18 01:33:46.663590 | orchestrator | ++ TEMPEST=false 2026-03-18 01:33:46.663595 | orchestrator | ++ export IS_ZUUL=true 2026-03-18 01:33:46.663601 | orchestrator | ++ IS_ZUUL=true 2026-03-18 01:33:46.663606 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 01:33:46.663612 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 01:33:46.663617 | orchestrator | ++ export EXTERNAL_API=false 2026-03-18 01:33:46.663622 | orchestrator | ++ EXTERNAL_API=false 2026-03-18 01:33:46.663627 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-18 01:33:46.663632 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-18 01:33:46.663638 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-18 01:33:46.663643 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-18 01:33:46.663648 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-18 01:33:46.663654 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-18 01:33:46.663659 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-18 01:33:46.663664 | orchestrator | ++ export INTERACTIVE=false 2026-03-18 01:33:46.663669 | orchestrator | ++ INTERACTIVE=false 2026-03-18 01:33:46.663675 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-18 01:33:46.663683 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-03-18 01:33:46.663689 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-18 01:33:46.663694 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-03-18 01:33:46.670099 | orchestrator | + set -e 2026-03-18 01:33:46.670263 | orchestrator | + VERSION=9.5.0 2026-03-18 01:33:46.670278 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-03-18 01:33:46.680162 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-18 01:33:46.680296 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-18 01:33:46.683163 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-18 01:33:46.686146 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-18 01:33:46.693057 | orchestrator | /opt/configuration ~ 2026-03-18 01:33:46.693150 | orchestrator | + set -e 2026-03-18 01:33:46.693169 | orchestrator | + pushd /opt/configuration 2026-03-18 01:33:46.693185 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-18 01:33:46.694536 | orchestrator | + source /opt/venv/bin/activate 2026-03-18 01:33:46.695616 | orchestrator | ++ deactivate nondestructive 2026-03-18 01:33:46.695660 | orchestrator | ++ '[' -n '' ']' 2026-03-18 01:33:46.695676 | orchestrator | ++ '[' -n '' ']' 2026-03-18 01:33:46.695707 | orchestrator | ++ hash -r 2026-03-18 01:33:46.695716 | orchestrator | ++ '[' -n '' ']' 2026-03-18 01:33:46.695723 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-18 01:33:46.695730 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-18 01:33:46.695737 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-18 01:33:46.695745 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-18 01:33:46.695752 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-18 01:33:46.695759 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-18 01:33:46.695767 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-18 01:33:46.695775 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-18 01:33:46.695783 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-18 01:33:46.695791 | orchestrator | ++ export PATH 2026-03-18 01:33:46.695799 | orchestrator | ++ '[' -n '' ']' 2026-03-18 01:33:46.695814 | orchestrator | ++ '[' -z '' ']' 2026-03-18 01:33:46.695822 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-18 01:33:46.695830 | orchestrator | ++ PS1='(venv) ' 2026-03-18 01:33:46.695837 | orchestrator | ++ export PS1 2026-03-18 01:33:46.695845 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-18 01:33:46.695852 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-18 01:33:46.695861 | orchestrator | ++ hash -r 2026-03-18 01:33:46.695866 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-18 01:33:48.041984 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-18 01:33:48.043434 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-03-18 01:33:48.052150 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-18 01:33:48.052258 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-18 01:33:48.052270 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-18 01:33:48.058271 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-18 01:33:48.059823 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-18 01:33:48.061112 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-18 01:33:48.062511 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-18 01:33:48.105309 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-18 01:33:48.106985 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-18 01:33:48.109189 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-18 01:33:48.111431 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-18 01:33:48.115341 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-18 01:33:48.382807 | orchestrator | ++ which gilt 2026-03-18 01:33:48.386641 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-18 01:33:48.386709 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-18 01:33:48.674508 | orchestrator | osism.cfg-generics: 2026-03-18 01:33:48.820521 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-18 01:33:48.820791 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-18 01:33:48.821266 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-18 01:33:48.821312 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-18 01:33:49.780555 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-18 01:33:49.790304 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-18 01:33:50.296319 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-18 01:33:50.357786 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-18 01:33:50.357890 | orchestrator | + deactivate 2026-03-18 01:33:50.357904 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-18 01:33:50.357916 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-18 01:33:50.357925 | orchestrator | + export PATH 2026-03-18 01:33:50.357934 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-18 01:33:50.357943 | orchestrator | + '[' -n '' ']' 2026-03-18 01:33:50.357954 | orchestrator | + hash -r 2026-03-18 01:33:50.357963 | orchestrator | + '[' -n '' ']' 2026-03-18 01:33:50.357983 | orchestrator | + unset VIRTUAL_ENV 2026-03-18 01:33:50.357992 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-18 01:33:50.358000 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-18 01:33:50.358009 | orchestrator | + unset -f deactivate 2026-03-18 01:33:50.358068 | orchestrator | + popd 2026-03-18 01:33:50.358078 | orchestrator | ~ 2026-03-18 01:33:50.359621 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-18 01:33:50.359686 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-18 01:33:50.360133 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-18 01:33:50.410890 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-18 01:33:50.410976 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-18 01:33:50.411789 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-18 01:33:50.466331 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-18 01:33:50.466728 | orchestrator | ++ semver 2024.2 2025.1 2026-03-18 01:33:50.524347 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-18 01:33:50.524449 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-18 01:33:50.607107 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-18 01:33:50.607285 | orchestrator | + source /opt/venv/bin/activate 2026-03-18 01:33:50.607318 | orchestrator | ++ deactivate nondestructive 2026-03-18 01:33:50.607331 | orchestrator | ++ '[' -n '' ']' 2026-03-18 01:33:50.607340 | orchestrator | ++ '[' -n '' ']' 2026-03-18 01:33:50.607350 | orchestrator | ++ hash -r 2026-03-18 01:33:50.607360 | orchestrator | ++ '[' -n '' ']' 2026-03-18 01:33:50.607370 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-18 01:33:50.607379 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-18 01:33:50.607389 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-18 01:33:50.607699 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-18 01:33:50.607797 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-18 01:33:50.607811 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-18 01:33:50.607821 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-18 01:33:50.607831 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-18 01:33:50.607863 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-18 01:33:50.607874 | orchestrator | ++ export PATH 2026-03-18 01:33:50.607892 | orchestrator | ++ '[' -n '' ']' 2026-03-18 01:33:50.607959 | orchestrator | ++ '[' -z '' ']' 2026-03-18 01:33:50.607971 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-18 01:33:50.607982 | orchestrator | ++ PS1='(venv) ' 2026-03-18 01:33:50.607999 | orchestrator | ++ export PS1 2026-03-18 01:33:50.608016 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-18 01:33:50.608028 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-18 01:33:50.608576 | orchestrator | ++ hash -r 2026-03-18 01:33:50.608607 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-18 01:33:52.005735 | orchestrator | 2026-03-18 01:33:52.005843 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-18 01:33:52.005858 | orchestrator | 2026-03-18 01:33:52.005870 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-18 01:33:52.655313 | orchestrator | ok: [testbed-manager] 2026-03-18 01:33:52.655424 | orchestrator | 2026-03-18 01:33:52.655437 | orchestrator | TASK [Copy fact files] ********************************************************* 
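Earlier in the trace, a `semver A B` helper prints `1`, `0`, or `-1` depending on how version `A` compares to `B`, and the script gates features on the result (e.g. `enable_osism_kubernetes` for manager >= 7.0.0). The helper's implementation is not shown in the log; a rough equivalent (an assumption) can be built on GNU `sort -V`, though note that `sort -V` does not order pre-release suffixes like `10.0.0-0` the way strict semver does:

```shell
# Approximate stand-in for the semver helper seen in the trace:
# prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2 (per GNU version sort).
semver() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
    echo -1   # $1 sorts first, so it is the lower version
  else
    echo 1
  fi
}
```

With this sketch, `semver 9.5.0 7.0.0` prints `1`, matching the `[[ 1 -ge 0 ]]` branch taken in the log.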
2026-03-18 01:33:53.743881 | orchestrator | changed: [testbed-manager] 2026-03-18 01:33:53.744112 | orchestrator | 2026-03-18 01:33:53.744131 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-18 01:33:53.744173 | orchestrator | 2026-03-18 01:33:53.744184 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-18 01:33:56.337550 | orchestrator | ok: [testbed-manager] 2026-03-18 01:33:56.337658 | orchestrator | 2026-03-18 01:33:56.337677 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-18 01:33:56.389681 | orchestrator | ok: [testbed-manager] 2026-03-18 01:33:56.389809 | orchestrator | 2026-03-18 01:33:56.389835 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-18 01:33:56.909733 | orchestrator | changed: [testbed-manager] 2026-03-18 01:33:56.909834 | orchestrator | 2026-03-18 01:33:56.909855 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-18 01:33:56.954854 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:33:56.954948 | orchestrator | 2026-03-18 01:33:56.954961 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-18 01:33:57.314723 | orchestrator | changed: [testbed-manager] 2026-03-18 01:33:57.314811 | orchestrator | 2026-03-18 01:33:57.314828 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-18 01:33:57.679626 | orchestrator | ok: [testbed-manager] 2026-03-18 01:33:57.679724 | orchestrator | 2026-03-18 01:33:57.679740 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-18 01:33:57.805526 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:33:57.805621 | orchestrator | 2026-03-18 01:33:57.805635 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-03-18 01:33:57.805647 | orchestrator | 2026-03-18 01:33:57.805657 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-18 01:33:59.710708 | orchestrator | ok: [testbed-manager] 2026-03-18 01:33:59.710799 | orchestrator | 2026-03-18 01:33:59.710811 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-18 01:33:59.825610 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-18 01:33:59.825722 | orchestrator | 2026-03-18 01:33:59.825748 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-18 01:33:59.884511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-18 01:33:59.884611 | orchestrator | 2026-03-18 01:33:59.884626 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-18 01:34:01.055367 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-18 01:34:01.055497 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-18 01:34:01.055524 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-18 01:34:01.055544 | orchestrator | 2026-03-18 01:34:01.055568 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-18 01:34:02.974388 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-18 01:34:02.974488 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-18 01:34:02.974500 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-18 01:34:02.974511 | orchestrator | 2026-03-18 01:34:02.974524 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-03-18 01:34:03.690845 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-18 01:34:03.690954 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:03.690969 | orchestrator | 2026-03-18 01:34:03.690982 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-18 01:34:04.403272 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-18 01:34:04.403400 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:04.403427 | orchestrator | 2026-03-18 01:34:04.403445 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-18 01:34:04.465732 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:34:04.465805 | orchestrator | 2026-03-18 01:34:04.465812 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-18 01:34:04.885512 | orchestrator | ok: [testbed-manager] 2026-03-18 01:34:04.885618 | orchestrator | 2026-03-18 01:34:04.885630 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-18 01:34:04.982827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-18 01:34:04.982923 | orchestrator | 2026-03-18 01:34:04.982932 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-18 01:34:06.221913 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:06.222096 | orchestrator | 2026-03-18 01:34:06.222137 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-18 01:34:07.129024 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:07.129142 | orchestrator | 2026-03-18 01:34:07.129158 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-18 01:34:16.912577 | 
orchestrator | changed: [testbed-manager] 2026-03-18 01:34:16.912699 | orchestrator | 2026-03-18 01:34:16.912717 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-18 01:34:16.973561 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:34:16.973667 | orchestrator | 2026-03-18 01:34:16.973709 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-18 01:34:16.973724 | orchestrator | 2026-03-18 01:34:16.973738 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-18 01:34:19.039281 | orchestrator | ok: [testbed-manager] 2026-03-18 01:34:19.039362 | orchestrator | 2026-03-18 01:34:19.039371 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-18 01:34:19.173983 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-18 01:34:19.174152 | orchestrator | 2026-03-18 01:34:19.174168 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-18 01:34:19.249744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-18 01:34:19.249835 | orchestrator | 2026-03-18 01:34:19.249849 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-18 01:34:21.981928 | orchestrator | ok: [testbed-manager] 2026-03-18 01:34:21.982116 | orchestrator | 2026-03-18 01:34:21.982159 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-18 01:34:22.046483 | orchestrator | ok: [testbed-manager] 2026-03-18 01:34:22.046572 | orchestrator | 2026-03-18 01:34:22.046586 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-18 01:34:22.190474 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-18 01:34:22.190589 | orchestrator | 2026-03-18 01:34:22.190606 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-18 01:34:25.261709 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-18 01:34:25.261832 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-18 01:34:25.261855 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-18 01:34:25.261872 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-18 01:34:25.261889 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-18 01:34:25.261905 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-18 01:34:25.261922 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-18 01:34:25.261940 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-18 01:34:25.261956 | orchestrator | 2026-03-18 01:34:25.261973 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-18 01:34:25.969275 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:25.969379 | orchestrator | 2026-03-18 01:34:25.969397 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-18 01:34:26.666772 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:26.666876 | orchestrator | 2026-03-18 01:34:26.666886 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-18 01:34:26.764550 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-18 01:34:26.764652 | orchestrator | 2026-03-18 01:34:26.764669 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-03-18 01:34:28.086374 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-18 01:34:28.086499 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-18 01:34:28.086517 | orchestrator | 2026-03-18 01:34:28.086530 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-18 01:34:28.774301 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:28.774396 | orchestrator | 2026-03-18 01:34:28.774408 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-18 01:34:28.829493 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:34:28.829585 | orchestrator | 2026-03-18 01:34:28.829608 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-18 01:34:28.910672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-18 01:34:28.910835 | orchestrator | 2026-03-18 01:34:28.910860 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-18 01:34:29.637731 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:29.637861 | orchestrator | 2026-03-18 01:34:29.637878 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-18 01:34:29.713895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-18 01:34:29.713995 | orchestrator | 2026-03-18 01:34:29.714011 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-18 01:34:31.207100 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-18 01:34:31.207201 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-03-18 01:34:31.207272 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:31.207290 | orchestrator | 2026-03-18 01:34:31.207302 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-18 01:34:31.897512 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:31.897594 | orchestrator | 2026-03-18 01:34:31.897605 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-18 01:34:31.954679 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:34:31.954767 | orchestrator | 2026-03-18 01:34:31.954780 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-18 01:34:32.055038 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-18 01:34:32.055068 | orchestrator | 2026-03-18 01:34:32.055075 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-18 01:34:32.618346 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:32.618473 | orchestrator | 2026-03-18 01:34:32.618494 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-18 01:34:33.064083 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:33.064194 | orchestrator | 2026-03-18 01:34:33.064211 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-18 01:34:34.442008 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-18 01:34:34.442216 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-18 01:34:34.442316 | orchestrator | 2026-03-18 01:34:34.442338 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-18 01:34:35.176493 | orchestrator | changed: [testbed-manager] 2026-03-18 
01:34:35.176591 | orchestrator | 2026-03-18 01:34:35.176607 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-18 01:34:35.602318 | orchestrator | ok: [testbed-manager] 2026-03-18 01:34:35.602393 | orchestrator | 2026-03-18 01:34:35.602400 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-18 01:34:36.041874 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:36.041964 | orchestrator | 2026-03-18 01:34:36.041975 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-18 01:34:36.087956 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:34:36.088042 | orchestrator | 2026-03-18 01:34:36.088054 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-18 01:34:36.169140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-18 01:34:36.169362 | orchestrator | 2026-03-18 01:34:36.169386 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-18 01:34:36.227857 | orchestrator | ok: [testbed-manager] 2026-03-18 01:34:36.227995 | orchestrator | 2026-03-18 01:34:36.228018 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-18 01:34:38.433427 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-18 01:34:38.433538 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-18 01:34:38.433555 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-18 01:34:38.433568 | orchestrator | 2026-03-18 01:34:38.433580 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-18 01:34:39.225896 | orchestrator | changed: [testbed-manager] 2026-03-18 
01:34:39.225989 | orchestrator | 2026-03-18 01:34:39.226001 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-18 01:34:39.997631 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:39.997755 | orchestrator | 2026-03-18 01:34:39.997780 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-18 01:34:40.782222 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:40.782348 | orchestrator | 2026-03-18 01:34:40.782366 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-18 01:34:40.858726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-18 01:34:40.858814 | orchestrator | 2026-03-18 01:34:40.858825 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-18 01:34:40.916329 | orchestrator | ok: [testbed-manager] 2026-03-18 01:34:40.916418 | orchestrator | 2026-03-18 01:34:40.916432 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-18 01:34:41.703594 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-18 01:34:41.703729 | orchestrator | 2026-03-18 01:34:41.703759 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-18 01:34:41.798655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-18 01:34:41.798760 | orchestrator | 2026-03-18 01:34:41.798777 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-18 01:34:42.575530 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:42.575637 | orchestrator | 2026-03-18 01:34:42.575662 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-03-18 01:34:43.262840 | orchestrator | ok: [testbed-manager] 2026-03-18 01:34:43.263635 | orchestrator | 2026-03-18 01:34:43.263658 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-18 01:34:43.325656 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:34:43.325759 | orchestrator | 2026-03-18 01:34:43.325776 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-18 01:34:43.400144 | orchestrator | ok: [testbed-manager] 2026-03-18 01:34:43.400296 | orchestrator | 2026-03-18 01:34:43.400325 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-18 01:34:44.279826 | orchestrator | changed: [testbed-manager] 2026-03-18 01:34:44.279921 | orchestrator | 2026-03-18 01:34:44.279934 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-18 01:36:00.034502 | orchestrator | changed: [testbed-manager] 2026-03-18 01:36:00.034616 | orchestrator | 2026-03-18 01:36:00.034633 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-18 01:36:01.117316 | orchestrator | ok: [testbed-manager] 2026-03-18 01:36:01.117397 | orchestrator | 2026-03-18 01:36:01.117406 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-18 01:36:01.180481 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:36:01.180575 | orchestrator | 2026-03-18 01:36:01.180595 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-18 01:36:03.801714 | orchestrator | changed: [testbed-manager] 2026-03-18 01:36:03.801816 | orchestrator | 2026-03-18 01:36:03.801831 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-03-18 01:36:03.857144 | orchestrator | ok: [testbed-manager] 2026-03-18 01:36:03.857242 | orchestrator | 2026-03-18 01:36:03.857258 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-18 01:36:03.857271 | orchestrator | 2026-03-18 01:36:03.857327 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-18 01:36:04.076141 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:36:04.076233 | orchestrator | 2026-03-18 01:36:04.076243 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-18 01:37:04.137753 | orchestrator | Pausing for 60 seconds 2026-03-18 01:37:04.137900 | orchestrator | changed: [testbed-manager] 2026-03-18 01:37:04.137914 | orchestrator | 2026-03-18 01:37:04.137926 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-18 01:37:07.266895 | orchestrator | changed: [testbed-manager] 2026-03-18 01:37:07.266997 | orchestrator | 2026-03-18 01:37:07.267022 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-18 01:38:09.484816 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-18 01:38:09.484938 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-18 01:38:09.485011 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
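The handler above polls for a healthy manager service, logging `FAILED - RETRYING` with a decrementing retry budget (50 attempts) until the check passes. The role's actual check command is not visible in the log; a generic POSIX shell retry loop in the same spirit (names illustrative):

```shell
# retry ATTEMPTS DELAY CMD [ARGS...]: run CMD until it succeeds or the
# attempt budget is exhausted, echoing a retries-left message on failure,
# mirroring Ansible's until/retries/delay output seen above.
retry() {
  attempts=$1; delay=$2; shift 2
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    echo "FAILED - RETRYING ($((attempts - i)) retries left)" >&2
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}
```

A plausible (assumed) invocation for a Docker healthcheck would be `retry 50 5 sh -c 'docker inspect -f "{{.State.Health.Status}}" manager | grep -q healthy'`, since `docker inspect -f` can read the container's health state directly.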
2026-03-18 01:38:09.485035 | orchestrator | changed: [testbed-manager]
2026-03-18 01:38:09.485057 | orchestrator |
2026-03-18 01:38:09.485077 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-03-18 01:38:20.954265 | orchestrator | changed: [testbed-manager]
2026-03-18 01:38:20.954394 | orchestrator |
2026-03-18 01:38:20.954406 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-03-18 01:38:21.043943 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-03-18 01:38:21.044025 | orchestrator |
2026-03-18 01:38:21.044032 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-18 01:38:21.044037 | orchestrator |
2026-03-18 01:38:21.044042 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-03-18 01:38:21.085597 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:38:21.085671 | orchestrator |
2026-03-18 01:38:21.085684 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-03-18 01:38:21.159339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-03-18 01:38:21.159501 | orchestrator |
2026-03-18 01:38:21.159524 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-03-18 01:38:21.990326 | orchestrator | changed: [testbed-manager]
2026-03-18 01:38:21.990528 | orchestrator |
2026-03-18 01:38:21.990548 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-03-18 01:38:25.598533 | orchestrator | ok: [testbed-manager]
2026-03-18 01:38:25.598663 | orchestrator |
2026-03-18 01:38:25.598688 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-03-18 01:38:25.684141 | orchestrator | ok: [testbed-manager] => {
2026-03-18 01:38:25.684236 | orchestrator | "version_check_result.stdout_lines": [
2026-03-18 01:38:25.684252 | orchestrator | "=== OSISM Container Version Check ===",
2026-03-18 01:38:25.684265 | orchestrator | "Checking running containers against expected versions...",
2026-03-18 01:38:25.684277 | orchestrator | "",
2026-03-18 01:38:25.684289 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-18 01:38:25.684301 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-18 01:38:25.684313 | orchestrator | " Enabled: true",
2026-03-18 01:38:25.684324 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-03-18 01:38:25.684335 | orchestrator | " Status: ✅ MATCH",
2026-03-18 01:38:25.684346 | orchestrator | "",
2026-03-18 01:38:25.684414 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-18 01:38:25.684452 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-18 01:38:25.684464 | orchestrator | " Enabled: true",
2026-03-18 01:38:25.684475 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-03-18 01:38:25.684486 | orchestrator | " Status: ✅ MATCH",
2026-03-18 01:38:25.684497 | orchestrator | "",
2026-03-18 01:38:25.684508 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-18 01:38:25.684519 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-18 01:38:25.684530 | orchestrator | " Enabled: true",
2026-03-18 01:38:25.684541 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-03-18 01:38:25.684551 | orchestrator | " Status: ✅ MATCH",
2026-03-18 01:38:25.684562 | orchestrator | "",
2026-03-18 01:38:25.684573 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-18 01:38:25.684585 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-18 01:38:25.684596 | orchestrator | " Enabled: true",
2026-03-18 01:38:25.684609 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-03-18 01:38:25.684622 | orchestrator | " Status: ✅ MATCH",
2026-03-18 01:38:25.684634 | orchestrator | "",
2026-03-18 01:38:25.684650 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-18 01:38:25.684662 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-18 01:38:25.684675 | orchestrator | " Enabled: true",
2026-03-18 01:38:25.684687 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-03-18 01:38:25.684700 | orchestrator | " Status: ✅ MATCH",
2026-03-18 01:38:25.684712 | orchestrator | "",
2026-03-18 01:38:25.684724 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-03-18 01:38:25.684737 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-18 01:38:25.684750 | orchestrator | " Enabled: true",
2026-03-18 01:38:25.684762 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-18 01:38:25.684775 | orchestrator | " Status: ✅ MATCH",
2026-03-18 01:38:25.684787 | orchestrator | "",
2026-03-18 01:38:25.684800 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-03-18 01:38:25.684812 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-18 01:38:25.684824 | orchestrator | " Enabled: true",
2026-03-18 01:38:25.684837 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-18 01:38:25.684850 | orchestrator | " Status: ✅ MATCH",
2026-03-18 01:38:25.684862 | orchestrator | "",
2026-03-18 01:38:25.684875 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-03-18 01:38:25.684887 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-18 01:38:25.684900 | orchestrator | " Enabled: true",
2026-03-18 01:38:25.684912 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-18 01:38:25.684925 | orchestrator | " Status: ✅ MATCH",
2026-03-18 01:38:25.684937 | orchestrator | "",
2026-03-18 01:38:25.684949 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-03-18 01:38:25.684960 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-18 01:38:25.684970 | orchestrator | " Enabled: true",
2026-03-18 01:38:25.684981 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-03-18 01:38:25.684992 | orchestrator | " Status: ✅ MATCH",
2026-03-18 01:38:25.685002 | orchestrator | "",
2026-03-18 01:38:25.685013 | orchestrator | "Checking service: redis (Redis Cache)",
2026-03-18 01:38:25.685024 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-18 01:38:25.685035 | orchestrator | " Enabled: true",
2026-03-18 01:38:25.685045 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-18 01:38:25.685056 | orchestrator | " Status: ✅ MATCH",
2026-03-18 01:38:25.685067 | orchestrator | "",
2026-03-18 01:38:25.685077 | orchestrator | "Checking service: api (OSISM API Service)",
2026-03-18 01:38:25.685096 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-18 01:38:25.685107 | orchestrator | " Enabled: true",
2026-03-18 01:38:25.685118 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-18 01:38:25.685129 | orchestrator | " Status: ✅ MATCH",
2026-03-18 01:38:25.685139 | orchestrator | "",
2026-03-18 01:38:25.685150 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-03-18 01:38:25.685161 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-18 01:38:25.685172 | orchestrator | " Enabled: true",
2026-03-18 01:38:25.685182 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-18 01:38:25.685193 | orchestrator | " Status: ✅ MATCH",
2026-03-18 01:38:25.685205 | orchestrator | "",
2026-03-18 01:38:25.685215 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-03-18 01:38:25.685226 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-18 01:38:25.685237 | orchestrator | " Enabled: true",
2026-03-18 01:38:25.685248 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-18 01:38:25.685259 | orchestrator | " Status: ✅ MATCH",
2026-03-18 01:38:25.685270 | orchestrator | "",
2026-03-18 01:38:25.685280 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-03-18 01:38:25.685291 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-18 01:38:25.685302 | orchestrator | " Enabled: true",
2026-03-18 01:38:25.685313 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-18 01:38:25.685341 | orchestrator | " Status: ✅ MATCH",
2026-03-18 01:38:25.685386 | orchestrator | "",
2026-03-18 01:38:25.685406 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-03-18 01:38:25.685424 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-18 01:38:25.685456 | orchestrator | " Enabled: true",
2026-03-18 01:38:25.685475 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-03-18 01:38:25.685487 | orchestrator | " Status: ✅ MATCH",
2026-03-18 01:38:25.685498 | orchestrator | "",
2026-03-18 01:38:25.685508 | orchestrator | "=== Summary ===",
2026-03-18 01:38:25.685519 | orchestrator | "Errors (version mismatches): 0",
2026-03-18 01:38:25.685530 | orchestrator | "Warnings (expected containers not running): 0",
2026-03-18 01:38:25.685541 | orchestrator | "",
2026-03-18 01:38:25.685551 | orchestrator | "✅ All running containers match expected versions!"
2026-03-18 01:38:25.685562 | orchestrator | ]
2026-03-18 01:38:25.685573 | orchestrator | }
2026-03-18 01:38:25.685584 | orchestrator |
2026-03-18 01:38:25.685595 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-03-18 01:38:25.743422 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:38:25.743508 | orchestrator |
2026-03-18 01:38:25.743520 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 01:38:25.743529 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-03-18 01:38:25.743536 | orchestrator |
2026-03-18 01:38:25.855505 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-18 01:38:25.855608 | orchestrator | + deactivate
2026-03-18 01:38:25.855627 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-18 01:38:25.855644 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-18 01:38:25.855658 | orchestrator | + export PATH
2026-03-18 01:38:25.855671 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-18 01:38:25.855685 | orchestrator | + '[' -n '' ']'
2026-03-18 01:38:25.855698 | orchestrator | + hash -r
2026-03-18 01:38:25.855712 | orchestrator | + '[' -n '' ']'
2026-03-18 01:38:25.855727 | orchestrator | + unset VIRTUAL_ENV
2026-03-18 01:38:25.855740 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-18 01:38:25.855753 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-18 01:38:25.855767 | orchestrator | + unset -f deactivate
2026-03-18 01:38:25.855781 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-03-18 01:38:25.864641 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-18 01:38:25.864708 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-18 01:38:25.864747 | orchestrator | + local max_attempts=60
2026-03-18 01:38:25.864758 | orchestrator | + local name=ceph-ansible
2026-03-18 01:38:25.864769 | orchestrator | + local attempt_num=1
2026-03-18 01:38:25.865615 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-18 01:38:25.903787 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-18 01:38:25.903856 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-18 01:38:25.903864 | orchestrator | + local max_attempts=60
2026-03-18 01:38:25.903872 | orchestrator | + local name=kolla-ansible
2026-03-18 01:38:25.903878 | orchestrator | + local attempt_num=1
2026-03-18 01:38:25.903987 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-18 01:38:25.943961 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-18 01:38:25.944052 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-18 01:38:25.944067 | orchestrator | + local max_attempts=60
2026-03-18 01:38:25.944078 | orchestrator | + local name=osism-ansible
2026-03-18 01:38:25.944088 | orchestrator | + local attempt_num=1
2026-03-18 01:38:25.944489 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-18 01:38:25.971105 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-18 01:38:25.971191 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-18 01:38:25.971205 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-18 01:38:26.747443 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-03-18 01:38:26.945031 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-03-18 01:38:26.945104 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2026-03-18 01:38:26.945113 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2026-03-18 01:38:26.945119 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-03-18 01:38:26.945126 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2026-03-18 01:38:26.945152 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2026-03-18 01:38:26.945160 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2026-03-18 01:38:26.945172 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2026-03-18 01:38:26.945183 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2026-03-18 01:38:26.945191 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2026-03-18 01:38:26.945199 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2026-03-18 01:38:26.945207 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2026-03-18 01:38:26.945216 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2026-03-18 01:38:26.945246 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2026-03-18 01:38:26.945255 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2026-03-18 01:38:26.945264 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2026-03-18 01:38:26.953257 | orchestrator | ++ semver 9.5.0 7.0.0
2026-03-18 01:38:27.006217 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-18 01:38:27.006312 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-03-18 01:38:27.012350 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-03-18 01:38:39.394585 | orchestrator | 2026-03-18 01:38:39 | INFO  | Task 1b4d53db-a918-4014-ac97-6ac3fcd0838d (resolvconf) was prepared for execution.
2026-03-18 01:38:39.394689 | orchestrator | 2026-03-18 01:38:39 | INFO  | It takes a moment until task 1b4d53db-a918-4014-ac97-6ac3fcd0838d (resolvconf) has been started and output is visible here.
2026-03-18 01:38:54.362624 | orchestrator |
2026-03-18 01:38:54.362738 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-03-18 01:38:54.362754 | orchestrator |
2026-03-18 01:38:54.362765 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-18 01:38:54.362777 | orchestrator | Wednesday 18 March 2026 01:38:43 +0000 (0:00:00.152) 0:00:00.152 *******
2026-03-18 01:38:54.362788 | orchestrator | ok: [testbed-manager]
2026-03-18 01:38:54.362800 | orchestrator |
2026-03-18 01:38:54.362811 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-18 01:38:54.362823 | orchestrator | Wednesday 18 March 2026 01:38:47 +0000 (0:00:03.992) 0:00:04.145 *******
2026-03-18 01:38:54.362833 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:38:54.362845 | orchestrator |
2026-03-18 01:38:54.362856 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-18 01:38:54.362866 | orchestrator | Wednesday 18 March 2026 01:38:47 +0000 (0:00:00.065) 0:00:04.210 *******
2026-03-18 01:38:54.362877 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-03-18 01:38:54.362889 | orchestrator |
2026-03-18 01:38:54.362900 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-18 01:38:54.362911 | orchestrator | Wednesday 18 March 2026 01:38:48 +0000 (0:00:00.089) 0:00:04.300 *******
2026-03-18 01:38:54.362941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-03-18 01:38:54.362952 | orchestrator |
2026-03-18 01:38:54.362963 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-18 01:38:54.362974 | orchestrator | Wednesday 18 March 2026 01:38:48 +0000 (0:00:00.089) 0:00:04.390 *******
2026-03-18 01:38:54.362984 | orchestrator | ok: [testbed-manager]
2026-03-18 01:38:54.362995 | orchestrator |
2026-03-18 01:38:54.363006 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-18 01:38:54.363016 | orchestrator | Wednesday 18 March 2026 01:38:49 +0000 (0:00:01.243) 0:00:05.633 *******
2026-03-18 01:38:54.363027 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:38:54.363037 | orchestrator |
2026-03-18 01:38:54.363048 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-18 01:38:54.363058 | orchestrator | Wednesday 18 March 2026 01:38:49 +0000 (0:00:00.548) 0:00:05.696 *******
2026-03-18 01:38:54.363089 | orchestrator | ok: [testbed-manager]
2026-03-18 01:38:54.363100 | orchestrator |
2026-03-18 01:38:54.363111 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-18 01:38:54.363122 | orchestrator | Wednesday 18 March 2026 01:38:49 +0000 (0:00:00.079) 0:00:06.245 *******
2026-03-18 01:38:54.363132 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:38:54.363143 | orchestrator |
2026-03-18 01:38:54.363153 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-18 01:38:54.363167 | orchestrator | Wednesday 18 March 2026 01:38:50 +0000 (0:00:00.079) 0:00:06.324 *******
2026-03-18 01:38:54.363179 | orchestrator | changed: [testbed-manager]
2026-03-18 01:38:54.363191 | orchestrator |
2026-03-18 01:38:54.363204 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-18 01:38:54.363216 | orchestrator | Wednesday 18 March 2026 01:38:50 +0000 (0:00:00.593) 0:00:06.917 *******
2026-03-18 01:38:54.363228 | orchestrator | changed: [testbed-manager]
2026-03-18 01:38:54.363241 | orchestrator |
2026-03-18 01:38:54.363253 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-18 01:38:54.363265 | orchestrator | Wednesday 18 March 2026 01:38:51 +0000 (0:00:01.175) 0:00:08.093 *******
2026-03-18 01:38:54.363278 | orchestrator | ok: [testbed-manager]
2026-03-18 01:38:54.363291 | orchestrator |
2026-03-18 01:38:54.363303 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-18 01:38:54.363316 | orchestrator | Wednesday 18 March 2026 01:38:52 +0000 (0:00:01.012) 0:00:09.106 *******
2026-03-18 01:38:54.363328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-03-18 01:38:54.363341 | orchestrator |
2026-03-18 01:38:54.363353 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-18 01:38:54.363390 | orchestrator | Wednesday 18 March 2026 01:38:52 +0000 (0:00:00.085) 0:00:09.192 *******
2026-03-18 01:38:54.363401 | orchestrator | changed: [testbed-manager]
2026-03-18 01:38:54.363412 | orchestrator |
2026-03-18 01:38:54.363423 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 01:38:54.363435 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-18 01:38:54.363445 | orchestrator |
2026-03-18 01:38:54.363456 | orchestrator |
2026-03-18 01:38:54.363466 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 01:38:54.363477 | orchestrator | Wednesday 18 March 2026 01:38:54 +0000 (0:00:01.194) 0:00:10.387 *******
2026-03-18 01:38:54.363487 | orchestrator | ===============================================================================
2026-03-18 01:38:54.363498 | orchestrator | Gathering Facts --------------------------------------------------------- 3.99s
2026-03-18 01:38:54.363508 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.24s
2026-03-18 01:38:54.363519 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.19s
2026-03-18 01:38:54.363529 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.18s
2026-03-18 01:38:54.363539 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.01s
2026-03-18 01:38:54.363550 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.59s
2026-03-18 01:38:54.363578 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.55s
2026-03-18 01:38:54.363590 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2026-03-18 01:38:54.363601 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2026-03-18 01:38:54.363611 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2026-03-18 01:38:54.363622 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2026-03-18 01:38:54.363632 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2026-03-18 01:38:54.363651 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2026-03-18 01:38:54.724659 | orchestrator | + osism apply sshconfig
2026-03-18 01:39:06.950559 | orchestrator | 2026-03-18 01:39:06 | INFO  | Task 54b2d896-7f99-4971-94ac-8b4745e846e2 (sshconfig) was prepared for execution.
2026-03-18 01:39:06.950659 | orchestrator | 2026-03-18 01:39:06 | INFO  | It takes a moment until task 54b2d896-7f99-4971-94ac-8b4745e846e2 (sshconfig) has been started and output is visible here.
2026-03-18 01:39:19.630633 | orchestrator |
2026-03-18 01:39:19.630716 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-03-18 01:39:19.630722 | orchestrator |
2026-03-18 01:39:19.630747 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-03-18 01:39:19.630754 | orchestrator | Wednesday 18 March 2026 01:39:11 +0000 (0:00:00.174) 0:00:00.174 *******
2026-03-18 01:39:19.630761 | orchestrator | ok: [testbed-manager]
2026-03-18 01:39:19.630768 | orchestrator |
2026-03-18 01:39:19.630774 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-03-18 01:39:19.630780 | orchestrator | Wednesday 18 March 2026 01:39:11 +0000 (0:00:00.572) 0:00:00.747 *******
2026-03-18 01:39:19.630786 | orchestrator | changed: [testbed-manager]
2026-03-18 01:39:19.630793 | orchestrator |
2026-03-18 01:39:19.630799 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-03-18 01:39:19.630805 | orchestrator | Wednesday 18 March 2026 01:39:12 +0000 (0:00:00.585) 0:00:01.333 *******
2026-03-18 01:39:19.630811 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-03-18 01:39:19.630819 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-03-18 01:39:19.630825 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-03-18 01:39:19.630831 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-03-18 01:39:19.630838 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-03-18 01:39:19.630845 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-03-18 01:39:19.630851 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-03-18 01:39:19.630856 | orchestrator |
2026-03-18 01:39:19.630862 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-03-18 01:39:19.630869 | orchestrator | Wednesday 18 March 2026 01:39:18 +0000 (0:00:06.129) 0:00:07.462 *******
2026-03-18 01:39:19.630876 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:39:19.630883 | orchestrator |
2026-03-18 01:39:19.630889 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-03-18 01:39:19.630894 | orchestrator | Wednesday 18 March 2026 01:39:18 +0000 (0:00:00.089) 0:00:07.552 *******
2026-03-18 01:39:19.630898 | orchestrator | changed: [testbed-manager]
2026-03-18 01:39:19.630902 | orchestrator |
2026-03-18 01:39:19.630906 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 01:39:19.630911 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 01:39:19.630916 | orchestrator |
2026-03-18 01:39:19.630920 | orchestrator |
2026-03-18 01:39:19.630924 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 01:39:19.630928 | orchestrator | Wednesday 18 March 2026 01:39:19 +0000 (0:00:00.644) 0:00:08.196 *******
2026-03-18 01:39:19.630931 | orchestrator | ===============================================================================
2026-03-18 01:39:19.630935 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.13s
2026-03-18 01:39:19.630939 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.64s
2026-03-18 01:39:19.630943 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.59s
2026-03-18 01:39:19.630947 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.57s
2026-03-18 01:39:19.630951 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s
2026-03-18 01:39:19.984502 | orchestrator | + osism apply known-hosts
2026-03-18 01:39:32.191656 | orchestrator | 2026-03-18 01:39:32 | INFO  | Task 0fbe3c5e-8bab-4afb-872d-17d77d4800cd (known-hosts) was prepared for execution.
2026-03-18 01:39:32.191800 | orchestrator | 2026-03-18 01:39:32 | INFO  | It takes a moment until task 0fbe3c5e-8bab-4afb-872d-17d77d4800cd (known-hosts) has been started and output is visible here.
2026-03-18 01:39:50.069675 | orchestrator |
2026-03-18 01:39:50.069757 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-03-18 01:39:50.069766 | orchestrator |
2026-03-18 01:39:50.069771 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-03-18 01:39:50.069777 | orchestrator | Wednesday 18 March 2026 01:39:36 +0000 (0:00:00.198) 0:00:00.198 *******
2026-03-18 01:39:50.069782 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-03-18 01:39:50.069788 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-03-18 01:39:50.069793 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-03-18 01:39:50.069798 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-03-18 01:39:50.069802 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-18 01:39:50.069807 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-18 01:39:50.069812 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-18 01:39:50.069816 | orchestrator |
2026-03-18 01:39:50.069821 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-03-18 01:39:50.069827 | orchestrator | Wednesday 18 March 2026 01:39:42 +0000 (0:00:06.239) 0:00:06.438 *******
2026-03-18 01:39:50.069832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-03-18 01:39:50.069839 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-03-18 01:39:50.069843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-03-18 01:39:50.069848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-03-18 01:39:50.069852 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-03-18 01:39:50.069864 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-03-18 01:39:50.069869 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-03-18 01:39:50.069873 | orchestrator |
2026-03-18 01:39:50.069878 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-18 01:39:50.069883 | orchestrator | Wednesday 18 March 2026 01:39:43 +0000 (0:00:00.184) 0:00:06.622 *******
2026-03-18 01:39:50.069887 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH/0YxzzhkENcJTgrL0t3TEi1pgoLjufH/aGonZBEVzC)
2026-03-18 01:39:50.069899 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzNvgtg0WwTxv1cPfED0GgJ9gwjdRmZdDqcfDMNmY7Bw/4oYlYF/fkZewz8mdKY6/bX0mwq5wZJe15ZhCDHppSv9Ob/nSdBJd+IpRC1Md0G4uij3+MlXuVanUw8N7Sh8IST51iYIFbz1qjlWt1C/NPcLbQ9iqJEe9OisLQB/Eq/6m6oLHNB9eJF1eb3AKyRSkb8kl4E8SE1yp966lxvslxkbivl843xthY0UthvhpMjujyojMSr+iPbOFxgfUvLi//YTrLlglDfkmdMfOAZ7sNcZHySAPXLEYmGhO3lzDNr+ejl7hDVD7x07viCUqhXv0liodbjBwRdzsspqt9L6aSE/Ijn/XdUWYc6Ht3psuyTAyJnVX247wegiXxSTGANKXHpZhuPhHh/PJk81V3eg/3aHv0cxCb8V4ZoMIM0aUHsaTACsE2z7lYG7mdTTCeQfO5mnZTNAUnocuE5bHrraeYJZPOTjfaoEGUVUELvMR/aoi40qsr5WUeJQBotPQS7xs=)
2026-03-18 01:39:50.069922 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGDklAd6QyP2hL2YOAL4foJ0JWgg/waXP9YT8vyfAe4QxtJhhKQUXCxKtUNeqSfZlJCeG43v78+lt9b7uF4Omm4=)
2026-03-18 01:39:50.069929 | orchestrator |
2026-03-18 01:39:50.069933 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-18 01:39:50.069938 | orchestrator | Wednesday 18 March 2026 01:39:44 +0000 (0:00:01.276) 0:00:07.899 *******
2026-03-18 01:39:50.069943 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOGrF53WUarW4gLLdZxTlchj05XaSCJ1mSpt+Q/UDeIBvsBTBIZy1ywnUGwbGE26ClFgQILMqQynyJmIPy23/VA=)
2026-03-18 01:39:50.069947 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFHY5/UZN1kzOWWEIMpUl/mOf9nO+OYzYyJQW3fqXNW/)
2026-03-18 01:39:50.069968 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7xtMi3sJ+hqW2aWar2TDMdWkNqaq1kC8RqzSfI8ykl17JhP5KV3rZlg3Vbo3c3s6xOILlHHQyX33HuWz9I37xDkFD5blpvAP6yVrnpnhRwUyPTLSsVlqpacF8Y2Ydr0IScyUJ/GdZvYeKrHHvwuIdjUIdSh0VSbq17DBBNvhQgktyu+8ZkD6BsSKLRt8te7rIdz462XKprSsbTthPMhs4NYAvH52phFshD5oX6cLxJLcZiZ4Ii0aizk20OHhuyu/xdGBVPj76bnNAGiFnzrJko6kgCBNJ1V1iaC3IFNmVJN6lTE6QdLsnYKmyJ9CPwJvpBHAUc5lHmYf28IU95ZmyqGFhmjX/yHgM7NPdABp+nro4C5QiAesPeAImcmMUw5+EUYe+TbDPjKAcFY/jysiycTxR9J9EdHnQrQTGa71iS9iIWnsT3fcjcU8wuMdfyChKT/uTQwAnjsPb3Fs+6WsQlJaYq1ztbnpg019dNzbWm3hc3wWvriaAq/KFuh1b2KM=)
2026-03-18 01:39:50.069974 | orchestrator |
2026-03-18 01:39:50.069978 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-18 01:39:50.069983 | orchestrator | Wednesday 18 March 2026 01:39:45 +0000 (0:00:01.148) 0:00:09.047 *******
2026-03-18 01:39:50.069987 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDLUpU0aARxWI9YJcPnX6gmefrDcsf4a0dNZcfuNE3M1)
2026-03-18 01:39:50.069992 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDH5UUkrQNbhKHTzXB1uTOZZjgCBjkRsxgYDV/SH2SOCZg/v4jGkBXzTSJEwm9B8RU4uPU/D27IiONVIQlkW8wOayUSATn1SrqGOntUkbrCz6Q5Fmm6QMthTiOHOgqf2nuXX/xPetD/BWVTZRo5rOcUa6JvbHTz6j0pni/KO5UCR5O4yeZJSKmPB3E4yfu9M3bvwe2+k9Rc4Dp4deVAgy/FJsdyNKr/ZeDmea9ZRA/fjMNlHllbjHYJKAaPDlDOrbSgGVHizD8HYxkIwjkKUEqkFJ4nfjFW/1jxpyaS57xD4qi88l4CJcKI/IaWeQts7ThKs9i5vjg1mSlZ3bn3wffopHUFufvxvVOZAToyV/PJiiKPLvPU1uMWelFmKsPelyZa1gMbpK+kECq/tccZwju6iZQ9qbLkHbnZ2dWiXAf00R3zlQBobqTpWINkR3vUG49PgL0WtVWVicI3c/HwFaiAAND5y1hp3ifJjEhKG/RbSfJiOuyfNyK0UXGtZs4xX+M=)
2026-03-18 01:39:50.069998 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCxHG4D3YEXQXVzzy3XtmvcllXKvaWy/vfaXNme8mIjNzMq93XObo3GQlTBl+f6BrOU8dgNlUQVV5Kib/j40NHs=)
2026-03-18 01:39:50.070002 | orchestrator |
2026-03-18 01:39:50.070007 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-18 01:39:50.070011 | orchestrator | Wednesday 18 March 2026 01:39:46 +0000 (0:00:01.159) 0:00:10.207 *******
2026-03-18 01:39:50.070016 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBlOvyBt3blibPw99xznQ1msbK3Iyspu3JDMy6Fq9oEyZ1PK9gJIRnAwLJS4GGJ6JTtNAWkLdzmarTSqIUOLcD0=)
2026-03-18 01:39:50.070021 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINP8TLhBpSq0gpvQd9yChz7O+glafV/v71VWdizPKMrF)
2026-03-18 01:39:50.070026 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVd38XxD3i4gg35v3HvNOipzn1+FvznYe/jSxyFtpEYhx2w05REWXKjKqK0stEx9TQTCTIV3dHI2ETNYemhKVqZ0SUGDwZUkFOlK3q6EZdn9XmdDlclJcZ3lNWTWYBp2xyVZPMC93+HpNqaZI8LPggij+eYQ5FuqhRhb7amQT0mzpjv4uUBSs0bYy3/PDw+HLyvvXoM8BC1MI210p8trgjMLafxq1EGpW1d/LdUGLX1sLOq1J6mvp+ZGbOhrgM7qcP/0ATp91HDGbKr7ZPsRFEeWcJvY1nV6tAZyfdKhiEqrVVaHzIyDG1Ht6coT1p6NuTnkOc7sjC6s6bYR18U0IibpERt20z250qzSmSxoWAOEHmzWf9vv49V+V3+h9MJGHVzySkl0bFyoy73IFE5o4bsO7FfGYtfydFP0OaRNa44nqxokjm/IUFYdQ+sS7PhHOnz7KE56ncDwS1DJb2F3MyWIEmjHRvVDnn6ZY9eVzcT/MDsHqQJ/DJDe3Y4ztq3Ec=)
2026-03-18 01:39:50.070034 | orchestrator |
2026-03-18 01:39:50.070039 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-03-18 01:39:50.070086 | orchestrator | Wednesday 18 March 2026 01:39:47 +0000 (0:00:01.112) 0:00:11.319 *******
2026-03-18 01:39:50.070137 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQCpWZ+furmZv5y3eM5Jmkobk6gbIL5e796M18+dpNU+OmXRMropaiuJghCIm0NWMPTuNttS2JiWN9bGGFOhMBiQQgsuRCAXDpKJsYjClpbWyBQnoOppf1Wam2RtwK2t6XAiZesNS1skdqAAXPtRYPPBhliWZxXE30/pJVtOlp6GamfzXat5jBzXwyCPe18ilyPfvWzN0o2EE1eCRuO85Y4tNeQmYBg8KJyvY5fraZh4ncU5kvY7jbWm8cF1l0VipZxPtSnrN2W6JDqCjFUCkMikfaB6W4LDMtU2Q2deuuO7R8b7aDr5ZeB1QpR3/gIgdljragN7Vbwv0LAEAoqpj2IdmpFuaEaXwEwEwzTS1oWZ2jb23MB/ZC3/dwckZ01JPo1S0eU5vld3ejLD18TMXFhv4h5jN4qMKHZ7vjYDDRSmt4K3QOdNcmYQZm6LOstUpJLjDcOb8ABkSwawEHwwp4LvryZ7UyPmbYXue5mEza5SKEL9XghmelTLiFHs4IaiqRU=) 2026-03-18 01:39:50.070142 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFua3y+n8tSDvjxiHAm/JBpMHUd3oULn8NbYiUMqSoPZ5HcSfGsk4erziwj65JpmeMuLwvG63r0yXGxm7kH1Uyo=) 2026-03-18 01:39:50.070147 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINrOJLr+Lrz8uMIxjALLFNb6V+0D5xUU3h29TIud2t2H) 2026-03-18 01:39:50.070151 | orchestrator | 2026-03-18 01:39:50.070156 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-18 01:39:50.070161 | orchestrator | Wednesday 18 March 2026 01:39:48 +0000 (0:00:01.139) 0:00:12.458 ******* 2026-03-18 01:39:50.070170 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC28O2ElvAWrJy1eccgQ5lmgR1S0i/6vhx8HdkMq9H0E/vxd+5N3hujEFXsBEfuVwX3fsB9yiYenBIdwQ7sQVNcotn/8BPJI7XZkUoJrMZHw7y4MJzXosXtwMB26Jowc2mmZhxC4COjdYBB01JP1HzH1V/DuDfXhlx0ZeWVMsPp4rEXKf0PBc3lSzE24H/cV1WyScH/oEoApOcBvUQm9BDiCmT8ljmK31q05CSoV72N6v9GjAKb6UdvDWfmoDamJFwcuHFsWwkvt7KOb9y6ShYwTc+L+ASMH9j70gKIB0nCWNCp9Ft9aRokn4ZTNf/d3Y56LGTb2zIF6VUj8hZH/FT34gVmUZeXEUlsUmIHjj1T7NWkz3vMUvJjVHLzxB71NGaYWFAnRHeenux5nD9HnawLxYu31nHYoTiftxnn9Gjl/hdnOwwyAXJW1qvwnnRmHfm7U2klXMB1nk3WqP0XxzSe3JVpSuZb0yx1x9kTzuA9x/McAmSU9aj73SFVFlZOKgc=) 2026-03-18 01:40:01.704229 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGOwagIwZTc3phyCqnt+TGyf0S1hlx4f4+Va1XqwIE35PI4svqbaQRtQiZEx0qt/wjSSu7X8wajAa7pDbi27QBY=) 2026-03-18 01:40:01.704349 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP9dLr2V5KI5+H9MnKyO/9jTZrRGPsDAvqsTLglJA9Jd) 2026-03-18 01:40:01.704366 | orchestrator | 2026-03-18 01:40:01.704377 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-18 01:40:01.704389 | orchestrator | Wednesday 18 March 2026 01:39:50 +0000 (0:00:01.099) 0:00:13.558 ******* 2026-03-18 01:40:01.704399 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHdzQ8xcmxTQi+bpUcr3+OGk8mM+1fdyZEdYakdzIqHiLV8d7OTw8dWkl+6ZTW6JRbDGXev5i/2DGiQG1pW//nE=) 2026-03-18 01:40:01.704411 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCAhcoYoVElgGLH+omRv0h4tAuPA0YeNCLAUu9sUAPN9UndyrrSr8PVmMbDyx6z6gV4jViWM8UreNE5XeyNx/Js/JQKOdfpVd4+3rTQi8+5oh/humtXlsO/vyAHpuEkwFNAC8pAQXa4N0q6ekzpAEL8vqZiNW1Oe9dUmq2uCBwmyrjMVjrtSNSt3nELJhaIO6ir7Z3pevRcOleehDVAM+xLJFqkCtHIZmNTL12YVTqD1tQKfmcTqoIK/dBL30rsX9LXMHEqMM2fzsG9g4Tg4BvJvRCcISwMHn1SaMxIU//5hWqYRuNwRgDgWlsYqNWnS47x/WpyWE/n1qkPh2fUsne+oVVffS2YN3jVep40RUFGZV5Ih/Dlsyk3Ap1Zxxj3qUeybnV20o9cQHx6FWDbx+qvKruqnoIdxKrdoNS+gizC1G9cIxwhQkwM1mwbkdS7AhwqSAOkQPcdjGlRnHUtottp/tY6psg4wbC514Cm5pdRfHT9Khq/1Wa1ae0fJNIundk=) 2026-03-18 01:40:01.704494 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILNkM6OYc1ejes8iJ44/4gMT42YcVqELzMokYXimIfIt) 2026-03-18 01:40:01.704506 | orchestrator | 2026-03-18 01:40:01.704517 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-18 01:40:01.704528 | orchestrator | Wednesday 18 March 2026 01:39:51 +0000 
(0:00:01.171) 0:00:14.729 ******* 2026-03-18 01:40:01.704538 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-18 01:40:01.704548 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-18 01:40:01.704558 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-18 01:40:01.704567 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-18 01:40:01.704577 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-18 01:40:01.704586 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-18 01:40:01.704596 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-18 01:40:01.704605 | orchestrator | 2026-03-18 01:40:01.704615 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-18 01:40:01.704626 | orchestrator | Wednesday 18 March 2026 01:39:56 +0000 (0:00:05.538) 0:00:20.268 ******* 2026-03-18 01:40:01.704637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-18 01:40:01.704649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-18 01:40:01.704658 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-18 01:40:01.704668 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-18 01:40:01.704678 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-18 01:40:01.704687 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-18 01:40:01.704697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-18 01:40:01.704707 | orchestrator | 2026-03-18 01:40:01.704716 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-18 01:40:01.704726 | orchestrator | Wednesday 18 March 2026 01:39:56 +0000 (0:00:00.185) 0:00:20.453 ******* 2026-03-18 01:40:01.704736 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH/0YxzzhkENcJTgrL0t3TEi1pgoLjufH/aGonZBEVzC) 2026-03-18 01:40:01.704776 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzNvgtg0WwTxv1cPfED0GgJ9gwjdRmZdDqcfDMNmY7Bw/4oYlYF/fkZewz8mdKY6/bX0mwq5wZJe15ZhCDHppSv9Ob/nSdBJd+IpRC1Md0G4uij3+MlXuVanUw8N7Sh8IST51iYIFbz1qjlWt1C/NPcLbQ9iqJEe9OisLQB/Eq/6m6oLHNB9eJF1eb3AKyRSkb8kl4E8SE1yp966lxvslxkbivl843xthY0UthvhpMjujyojMSr+iPbOFxgfUvLi//YTrLlglDfkmdMfOAZ7sNcZHySAPXLEYmGhO3lzDNr+ejl7hDVD7x07viCUqhXv0liodbjBwRdzsspqt9L6aSE/Ijn/XdUWYc6Ht3psuyTAyJnVX247wegiXxSTGANKXHpZhuPhHh/PJk81V3eg/3aHv0cxCb8V4ZoMIM0aUHsaTACsE2z7lYG7mdTTCeQfO5mnZTNAUnocuE5bHrraeYJZPOTjfaoEGUVUELvMR/aoi40qsr5WUeJQBotPQS7xs=) 2026-03-18 01:40:01.704801 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGDklAd6QyP2hL2YOAL4foJ0JWgg/waXP9YT8vyfAe4QxtJhhKQUXCxKtUNeqSfZlJCeG43v78+lt9b7uF4Omm4=) 2026-03-18 
01:40:01.704820 | orchestrator | 2026-03-18 01:40:01.704832 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-18 01:40:01.704844 | orchestrator | Wednesday 18 March 2026 01:39:58 +0000 (0:00:01.187) 0:00:21.640 ******* 2026-03-18 01:40:01.704856 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFHY5/UZN1kzOWWEIMpUl/mOf9nO+OYzYyJQW3fqXNW/) 2026-03-18 01:40:01.704873 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7xtMi3sJ+hqW2aWar2TDMdWkNqaq1kC8RqzSfI8ykl17JhP5KV3rZlg3Vbo3c3s6xOILlHHQyX33HuWz9I37xDkFD5blpvAP6yVrnpnhRwUyPTLSsVlqpacF8Y2Ydr0IScyUJ/GdZvYeKrHHvwuIdjUIdSh0VSbq17DBBNvhQgktyu+8ZkD6BsSKLRt8te7rIdz462XKprSsbTthPMhs4NYAvH52phFshD5oX6cLxJLcZiZ4Ii0aizk20OHhuyu/xdGBVPj76bnNAGiFnzrJko6kgCBNJ1V1iaC3IFNmVJN6lTE6QdLsnYKmyJ9CPwJvpBHAUc5lHmYf28IU95ZmyqGFhmjX/yHgM7NPdABp+nro4C5QiAesPeAImcmMUw5+EUYe+TbDPjKAcFY/jysiycTxR9J9EdHnQrQTGa71iS9iIWnsT3fcjcU8wuMdfyChKT/uTQwAnjsPb3Fs+6WsQlJaYq1ztbnpg019dNzbWm3hc3wWvriaAq/KFuh1b2KM=) 2026-03-18 01:40:01.704885 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOGrF53WUarW4gLLdZxTlchj05XaSCJ1mSpt+Q/UDeIBvsBTBIZy1ywnUGwbGE26ClFgQILMqQynyJmIPy23/VA=) 2026-03-18 01:40:01.704897 | orchestrator | 2026-03-18 01:40:01.704909 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-18 01:40:01.704920 | orchestrator | Wednesday 18 March 2026 01:39:59 +0000 (0:00:01.181) 0:00:22.821 ******* 2026-03-18 01:40:01.704931 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCxHG4D3YEXQXVzzy3XtmvcllXKvaWy/vfaXNme8mIjNzMq93XObo3GQlTBl+f6BrOU8dgNlUQVV5Kib/j40NHs=) 2026-03-18 01:40:01.704943 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDLUpU0aARxWI9YJcPnX6gmefrDcsf4a0dNZcfuNE3M1) 2026-03-18 01:40:01.704955 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDH5UUkrQNbhKHTzXB1uTOZZjgCBjkRsxgYDV/SH2SOCZg/v4jGkBXzTSJEwm9B8RU4uPU/D27IiONVIQlkW8wOayUSATn1SrqGOntUkbrCz6Q5Fmm6QMthTiOHOgqf2nuXX/xPetD/BWVTZRo5rOcUa6JvbHTz6j0pni/KO5UCR5O4yeZJSKmPB3E4yfu9M3bvwe2+k9Rc4Dp4deVAgy/FJsdyNKr/ZeDmea9ZRA/fjMNlHllbjHYJKAaPDlDOrbSgGVHizD8HYxkIwjkKUEqkFJ4nfjFW/1jxpyaS57xD4qi88l4CJcKI/IaWeQts7ThKs9i5vjg1mSlZ3bn3wffopHUFufvxvVOZAToyV/PJiiKPLvPU1uMWelFmKsPelyZa1gMbpK+kECq/tccZwju6iZQ9qbLkHbnZ2dWiXAf00R3zlQBobqTpWINkR3vUG49PgL0WtVWVicI3c/HwFaiAAND5y1hp3ifJjEhKG/RbSfJiOuyfNyK0UXGtZs4xX+M=) 2026-03-18 01:40:01.704967 | orchestrator | 2026-03-18 01:40:01.704979 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-18 01:40:01.704990 | orchestrator | Wednesday 18 March 2026 01:40:00 +0000 (0:00:01.205) 0:00:24.027 ******* 2026-03-18 01:40:01.705001 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINP8TLhBpSq0gpvQd9yChz7O+glafV/v71VWdizPKMrF) 2026-03-18 01:40:01.705012 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVd38XxD3i4gg35v3HvNOipzn1+FvznYe/jSxyFtpEYhx2w05REWXKjKqK0stEx9TQTCTIV3dHI2ETNYemhKVqZ0SUGDwZUkFOlK3q6EZdn9XmdDlclJcZ3lNWTWYBp2xyVZPMC93+HpNqaZI8LPggij+eYQ5FuqhRhb7amQT0mzpjv4uUBSs0bYy3/PDw+HLyvvXoM8BC1MI210p8trgjMLafxq1EGpW1d/LdUGLX1sLOq1J6mvp+ZGbOhrgM7qcP/0ATp91HDGbKr7ZPsRFEeWcJvY1nV6tAZyfdKhiEqrVVaHzIyDG1Ht6coT1p6NuTnkOc7sjC6s6bYR18U0IibpERt20z250qzSmSxoWAOEHmzWf9vv49V+V3+h9MJGHVzySkl0bFyoy73IFE5o4bsO7FfGYtfydFP0OaRNa44nqxokjm/IUFYdQ+sS7PhHOnz7KE56ncDwS1DJb2F3MyWIEmjHRvVDnn6ZY9eVzcT/MDsHqQJ/DJDe3Y4ztq3Ec=) 2026-03-18 01:40:01.705032 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBlOvyBt3blibPw99xznQ1msbK3Iyspu3JDMy6Fq9oEyZ1PK9gJIRnAwLJS4GGJ6JTtNAWkLdzmarTSqIUOLcD0=) 2026-03-18 01:40:06.709884 | orchestrator | 2026-03-18 01:40:06.709975 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-18 01:40:06.709990 | orchestrator | Wednesday 18 March 2026 01:40:01 +0000 (0:00:01.163) 0:00:25.191 ******* 2026-03-18 01:40:06.710001 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFua3y+n8tSDvjxiHAm/JBpMHUd3oULn8NbYiUMqSoPZ5HcSfGsk4erziwj65JpmeMuLwvG63r0yXGxm7kH1Uyo=) 2026-03-18 01:40:06.710068 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpWZ+furmZv5y3eM5Jmkobk6gbIL5e796M18+dpNU+OmXRMropaiuJghCIm0NWMPTuNttS2JiWN9bGGFOhMBiQQgsuRCAXDpKJsYjClpbWyBQnoOppf1Wam2RtwK2t6XAiZesNS1skdqAAXPtRYPPBhliWZxXE30/pJVtOlp6GamfzXat5jBzXwyCPe18ilyPfvWzN0o2EE1eCRuO85Y4tNeQmYBg8KJyvY5fraZh4ncU5kvY7jbWm8cF1l0VipZxPtSnrN2W6JDqCjFUCkMikfaB6W4LDMtU2Q2deuuO7R8b7aDr5ZeB1QpR3/gIgdljragN7Vbwv0LAEAoqpj2IdmpFuaEaXwEwEwzTS1oWZ2jb23MB/ZC3/dwckZ01JPo1S0eU5vld3ejLD18TMXFhv4h5jN4qMKHZ7vjYDDRSmt4K3QOdNcmYQZm6LOstUpJLjDcOb8ABkSwawEHwwp4LvryZ7UyPmbYXue5mEza5SKEL9XghmelTLiFHs4IaiqRU=) 2026-03-18 01:40:06.710079 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINrOJLr+Lrz8uMIxjALLFNb6V+0D5xUU3h29TIud2t2H) 2026-03-18 01:40:06.710086 | orchestrator | 2026-03-18 01:40:06.710092 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-18 01:40:06.710098 | orchestrator | Wednesday 18 March 2026 01:40:02 +0000 (0:00:01.178) 0:00:26.370 ******* 2026-03-18 01:40:06.710103 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGOwagIwZTc3phyCqnt+TGyf0S1hlx4f4+Va1XqwIE35PI4svqbaQRtQiZEx0qt/wjSSu7X8wajAa7pDbi27QBY=) 2026-03-18 01:40:06.710109 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC28O2ElvAWrJy1eccgQ5lmgR1S0i/6vhx8HdkMq9H0E/vxd+5N3hujEFXsBEfuVwX3fsB9yiYenBIdwQ7sQVNcotn/8BPJI7XZkUoJrMZHw7y4MJzXosXtwMB26Jowc2mmZhxC4COjdYBB01JP1HzH1V/DuDfXhlx0ZeWVMsPp4rEXKf0PBc3lSzE24H/cV1WyScH/oEoApOcBvUQm9BDiCmT8ljmK31q05CSoV72N6v9GjAKb6UdvDWfmoDamJFwcuHFsWwkvt7KOb9y6ShYwTc+L+ASMH9j70gKIB0nCWNCp9Ft9aRokn4ZTNf/d3Y56LGTb2zIF6VUj8hZH/FT34gVmUZeXEUlsUmIHjj1T7NWkz3vMUvJjVHLzxB71NGaYWFAnRHeenux5nD9HnawLxYu31nHYoTiftxnn9Gjl/hdnOwwyAXJW1qvwnnRmHfm7U2klXMB1nk3WqP0XxzSe3JVpSuZb0yx1x9kTzuA9x/McAmSU9aj73SFVFlZOKgc=) 2026-03-18 01:40:06.710115 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIP9dLr2V5KI5+H9MnKyO/9jTZrRGPsDAvqsTLglJA9Jd) 2026-03-18 01:40:06.710120 | orchestrator | 2026-03-18 01:40:06.710148 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-18 01:40:06.710154 | orchestrator | Wednesday 18 March 2026 01:40:04 +0000 (0:00:01.191) 0:00:27.561 ******* 2026-03-18 01:40:06.710160 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILNkM6OYc1ejes8iJ44/4gMT42YcVqELzMokYXimIfIt) 2026-03-18 01:40:06.710180 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCAhcoYoVElgGLH+omRv0h4tAuPA0YeNCLAUu9sUAPN9UndyrrSr8PVmMbDyx6z6gV4jViWM8UreNE5XeyNx/Js/JQKOdfpVd4+3rTQi8+5oh/humtXlsO/vyAHpuEkwFNAC8pAQXa4N0q6ekzpAEL8vqZiNW1Oe9dUmq2uCBwmyrjMVjrtSNSt3nELJhaIO6ir7Z3pevRcOleehDVAM+xLJFqkCtHIZmNTL12YVTqD1tQKfmcTqoIK/dBL30rsX9LXMHEqMM2fzsG9g4Tg4BvJvRCcISwMHn1SaMxIU//5hWqYRuNwRgDgWlsYqNWnS47x/WpyWE/n1qkPh2fUsne+oVVffS2YN3jVep40RUFGZV5Ih/Dlsyk3Ap1Zxxj3qUeybnV20o9cQHx6FWDbx+qvKruqnoIdxKrdoNS+gizC1G9cIxwhQkwM1mwbkdS7AhwqSAOkQPcdjGlRnHUtottp/tY6psg4wbC514Cm5pdRfHT9Khq/1Wa1ae0fJNIundk=) 2026-03-18 01:40:06.710186 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHdzQ8xcmxTQi+bpUcr3+OGk8mM+1fdyZEdYakdzIqHiLV8d7OTw8dWkl+6ZTW6JRbDGXev5i/2DGiQG1pW//nE=) 2026-03-18 01:40:06.710192 | orchestrator | 2026-03-18 01:40:06.710198 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-18 01:40:06.710219 | orchestrator | Wednesday 18 March 2026 01:40:05 +0000 (0:00:01.223) 0:00:28.785 ******* 2026-03-18 01:40:06.710225 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-18 01:40:06.710231 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-18 01:40:06.710236 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-18 01:40:06.710241 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-18 01:40:06.710247 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-18 01:40:06.710252 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-18 01:40:06.710257 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-18 01:40:06.710263 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:40:06.710268 | orchestrator | 2026-03-18 01:40:06.710287 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts 
entries] ************* 2026-03-18 01:40:06.710293 | orchestrator | Wednesday 18 March 2026 01:40:05 +0000 (0:00:00.192) 0:00:28.978 ******* 2026-03-18 01:40:06.710299 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:40:06.710304 | orchestrator | 2026-03-18 01:40:06.710309 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-18 01:40:06.710315 | orchestrator | Wednesday 18 March 2026 01:40:05 +0000 (0:00:00.066) 0:00:29.044 ******* 2026-03-18 01:40:06.710320 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:40:06.710325 | orchestrator | 2026-03-18 01:40:06.710331 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-18 01:40:06.710336 | orchestrator | Wednesday 18 March 2026 01:40:05 +0000 (0:00:00.059) 0:00:29.104 ******* 2026-03-18 01:40:06.710341 | orchestrator | changed: [testbed-manager] 2026-03-18 01:40:06.710347 | orchestrator | 2026-03-18 01:40:06.710352 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 01:40:06.710362 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-18 01:40:06.710372 | orchestrator | 2026-03-18 01:40:06.710381 | orchestrator | 2026-03-18 01:40:06.710390 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 01:40:06.710400 | orchestrator | Wednesday 18 March 2026 01:40:06 +0000 (0:00:00.798) 0:00:29.902 ******* 2026-03-18 01:40:06.710414 | orchestrator | =============================================================================== 2026-03-18 01:40:06.710424 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.24s 2026-03-18 01:40:06.710454 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.54s 2026-03-18 01:40:06.710461 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.28s 2026-03-18 01:40:06.710468 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2026-03-18 01:40:06.710475 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-03-18 01:40:06.710481 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-03-18 01:40:06.710487 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-03-18 01:40:06.710493 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-03-18 01:40:06.710500 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-03-18 01:40:06.710506 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-03-18 01:40:06.710512 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-03-18 01:40:06.710519 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-03-18 01:40:06.710525 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-03-18 01:40:06.710531 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-03-18 01:40:06.710543 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-03-18 01:40:06.710550 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-18 01:40:06.710556 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.80s 2026-03-18 01:40:06.710562 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 2026-03-18 01:40:06.710569 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2026-03-18 01:40:06.710576 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-03-18 01:40:07.095810 | orchestrator | + osism apply squid 2026-03-18 01:40:19.379408 | orchestrator | 2026-03-18 01:40:19 | INFO  | Task 4baf5b38-6a82-462f-a140-92e9962fab3b (squid) was prepared for execution. 2026-03-18 01:40:19.379573 | orchestrator | 2026-03-18 01:40:19 | INFO  | It takes a moment until task 4baf5b38-6a82-462f-a140-92e9962fab3b (squid) has been started and output is visible here. 2026-03-18 01:42:16.522384 | orchestrator | 2026-03-18 01:42:16.522502 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-18 01:42:16.522526 | orchestrator | 2026-03-18 01:42:16.522558 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-18 01:42:16.522664 | orchestrator | Wednesday 18 March 2026 01:40:23 +0000 (0:00:00.174) 0:00:00.174 ******* 2026-03-18 01:42:16.522676 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-18 01:42:16.522689 | orchestrator | 2026-03-18 01:42:16.522700 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-18 01:42:16.522711 | orchestrator | Wednesday 18 March 2026 01:40:24 +0000 (0:00:00.094) 0:00:00.268 ******* 2026-03-18 01:42:16.522722 | orchestrator | ok: [testbed-manager] 2026-03-18 01:42:16.522734 | orchestrator | 2026-03-18 01:42:16.522745 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-18 01:42:16.522756 | orchestrator | Wednesday 18 March 2026 01:40:25 +0000 (0:00:01.748) 0:00:02.016 ******* 2026-03-18 01:42:16.522768 | orchestrator | changed: 
[testbed-manager] => (item=/opt/squid/configuration) 2026-03-18 01:42:16.522779 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-18 01:42:16.522790 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-18 01:42:16.522801 | orchestrator | 2026-03-18 01:42:16.522812 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-18 01:42:16.522822 | orchestrator | Wednesday 18 March 2026 01:40:27 +0000 (0:00:01.341) 0:00:03.358 ******* 2026-03-18 01:42:16.522833 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-18 01:42:16.522844 | orchestrator | 2026-03-18 01:42:16.522855 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-18 01:42:16.522866 | orchestrator | Wednesday 18 March 2026 01:40:28 +0000 (0:00:01.120) 0:00:04.478 ******* 2026-03-18 01:42:16.522876 | orchestrator | ok: [testbed-manager] 2026-03-18 01:42:16.522887 | orchestrator | 2026-03-18 01:42:16.522898 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-18 01:42:16.522909 | orchestrator | Wednesday 18 March 2026 01:40:28 +0000 (0:00:00.361) 0:00:04.840 ******* 2026-03-18 01:42:16.522922 | orchestrator | changed: [testbed-manager] 2026-03-18 01:42:16.522936 | orchestrator | 2026-03-18 01:42:16.522948 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-18 01:42:16.522961 | orchestrator | Wednesday 18 March 2026 01:40:29 +0000 (0:00:01.039) 0:00:05.879 ******* 2026-03-18 01:42:16.522974 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-18 01:42:16.522992 | orchestrator | ok: [testbed-manager] 2026-03-18 01:42:16.523005 | orchestrator | 2026-03-18 01:42:16.523017 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-18 01:42:16.523059 | orchestrator | Wednesday 18 March 2026 01:41:02 +0000 (0:00:33.319) 0:00:39.198 ******* 2026-03-18 01:42:16.523073 | orchestrator | changed: [testbed-manager] 2026-03-18 01:42:16.523085 | orchestrator | 2026-03-18 01:42:16.523098 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-18 01:42:16.523111 | orchestrator | Wednesday 18 March 2026 01:41:15 +0000 (0:00:12.436) 0:00:51.635 ******* 2026-03-18 01:42:16.523123 | orchestrator | Pausing for 60 seconds 2026-03-18 01:42:16.523135 | orchestrator | changed: [testbed-manager] 2026-03-18 01:42:16.523148 | orchestrator | 2026-03-18 01:42:16.523161 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-18 01:42:16.523173 | orchestrator | Wednesday 18 March 2026 01:42:15 +0000 (0:01:00.100) 0:01:51.736 ******* 2026-03-18 01:42:16.523185 | orchestrator | ok: [testbed-manager] 2026-03-18 01:42:16.523198 | orchestrator | 2026-03-18 01:42:16.523210 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-18 01:42:16.523222 | orchestrator | Wednesday 18 March 2026 01:42:15 +0000 (0:00:00.067) 0:01:51.804 ******* 2026-03-18 01:42:16.523235 | orchestrator | changed: [testbed-manager] 2026-03-18 01:42:16.523247 | orchestrator | 2026-03-18 01:42:16.523260 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 01:42:16.523272 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 01:42:16.523285 | orchestrator | 2026-03-18 01:42:16.523297 | orchestrator | 2026-03-18 01:42:16.523308 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-18 01:42:16.523319 | orchestrator | Wednesday 18 March 2026 01:42:16 +0000 (0:00:00.653) 0:01:52.457 ******* 2026-03-18 01:42:16.523329 | orchestrator | =============================================================================== 2026-03-18 01:42:16.523340 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s 2026-03-18 01:42:16.523351 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 33.32s 2026-03-18 01:42:16.523381 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.44s 2026-03-18 01:42:16.523392 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.75s 2026-03-18 01:42:16.523403 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.34s 2026-03-18 01:42:16.523413 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.12s 2026-03-18 01:42:16.523424 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.04s 2026-03-18 01:42:16.523435 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.65s 2026-03-18 01:42:16.523469 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s 2026-03-18 01:42:16.523480 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-03-18 01:42:16.523491 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-18 01:42:16.854697 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-18 01:42:16.855915 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-18 01:42:16.919983 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-18 01:42:16.920077 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-03-18 01:42:16.926747 | orchestrator | + set -e 2026-03-18 01:42:16.927013 | orchestrator | + NAMESPACE=kolla/release 2026-03-18 01:42:16.927045 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-18 01:42:16.932989 | orchestrator | ++ semver 9.5.0 9.0.0 2026-03-18 01:42:17.002119 | orchestrator | + [[ 1 -lt 0 ]] 2026-03-18 01:42:17.002641 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-18 01:42:29.179157 | orchestrator | 2026-03-18 01:42:29 | INFO  | Task 0440f008-8845-48fd-b9b4-d2a317ccbbed (operator) was prepared for execution. 2026-03-18 01:42:29.179259 | orchestrator | 2026-03-18 01:42:29 | INFO  | It takes a moment until task 0440f008-8845-48fd-b9b4-d2a317ccbbed (operator) has been started and output is visible here. 2026-03-18 01:42:45.281375 | orchestrator | 2026-03-18 01:42:45.281498 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-18 01:42:45.281510 | orchestrator | 2026-03-18 01:42:45.281518 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-18 01:42:45.281526 | orchestrator | Wednesday 18 March 2026 01:42:33 +0000 (0:00:00.149) 0:00:00.149 ******* 2026-03-18 01:42:45.281534 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:42:45.281542 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:42:45.281550 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:42:45.281557 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:42:45.281564 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:42:45.281571 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:42:45.281578 | orchestrator | 2026-03-18 01:42:45.281621 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-18 01:42:45.281629 | orchestrator | Wednesday 18 March 2026 01:42:36 +0000 (0:00:03.342) 0:00:03.492 
******* 2026-03-18 01:42:45.281636 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:42:45.281643 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:42:45.281651 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:42:45.281657 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:42:45.281665 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:42:45.281672 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:42:45.281679 | orchestrator | 2026-03-18 01:42:45.281686 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-18 01:42:45.281693 | orchestrator | 2026-03-18 01:42:45.281700 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-18 01:42:45.281708 | orchestrator | Wednesday 18 March 2026 01:42:37 +0000 (0:00:00.826) 0:00:04.318 ******* 2026-03-18 01:42:45.281715 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:42:45.281722 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:42:45.281730 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:42:45.281737 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:42:45.281744 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:42:45.281751 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:42:45.281759 | orchestrator | 2026-03-18 01:42:45.281766 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-18 01:42:45.281788 | orchestrator | Wednesday 18 March 2026 01:42:37 +0000 (0:00:00.213) 0:00:04.532 ******* 2026-03-18 01:42:45.281795 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:42:45.281802 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:42:45.281809 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:42:45.281816 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:42:45.281823 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:42:45.281830 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:42:45.281837 | orchestrator | 2026-03-18 01:42:45.281844 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-18 01:42:45.281851 | orchestrator | Wednesday 18 March 2026 01:42:38 +0000 (0:00:00.219) 0:00:04.752 ******* 2026-03-18 01:42:45.281859 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:42:45.281867 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:42:45.281874 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:42:45.281881 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:42:45.281888 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:42:45.281896 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:42:45.281903 | orchestrator | 2026-03-18 01:42:45.281910 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-18 01:42:45.281917 | orchestrator | Wednesday 18 March 2026 01:42:38 +0000 (0:00:00.654) 0:00:05.406 ******* 2026-03-18 01:42:45.281924 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:42:45.281933 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:42:45.281941 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:42:45.281949 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:42:45.281957 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:42:45.281966 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:42:45.281990 | orchestrator | 2026-03-18 01:42:45.281999 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-18 01:42:45.282006 | orchestrator | Wednesday 18 March 2026 01:42:39 +0000 (0:00:00.799) 0:00:06.206 ******* 2026-03-18 01:42:45.282013 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-18 01:42:45.282073 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-18 01:42:45.282081 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-18 01:42:45.282088 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-18 01:42:45.282095 | 
orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-18 01:42:45.282102 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-18 01:42:45.282109 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-18 01:42:45.282116 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-18 01:42:45.282123 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-18 01:42:45.282130 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-18 01:42:45.282137 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-18 01:42:45.282145 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-18 01:42:45.282152 | orchestrator | 2026-03-18 01:42:45.282159 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-18 01:42:45.282166 | orchestrator | Wednesday 18 March 2026 01:42:40 +0000 (0:00:01.186) 0:00:07.392 ******* 2026-03-18 01:42:45.282173 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:42:45.282180 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:42:45.282187 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:42:45.282194 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:42:45.282201 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:42:45.282208 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:42:45.282216 | orchestrator | 2026-03-18 01:42:45.282224 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-18 01:42:45.282232 | orchestrator | Wednesday 18 March 2026 01:42:41 +0000 (0:00:01.164) 0:00:08.557 ******* 2026-03-18 01:42:45.282239 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-18 01:42:45.282246 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-03-18 01:42:45.282257 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-18 01:42:45.282271 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-18 01:42:45.282303 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-03-18 01:42:45.282318 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-18 01:42:45.282331 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-18 01:42:45.282343 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-18 01:42:45.282355 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-18 01:42:45.282367 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-18 01:42:45.282380 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-18 01:42:45.282391 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-18 01:42:45.282403 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-18 01:42:45.282416 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-18 01:42:45.282429 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-18 01:42:45.282442 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-18 01:42:45.282454 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-18 01:42:45.282468 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-18 01:42:45.282481 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-18 01:42:45.282491 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-18 01:42:45.282508 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-18 01:42:45.282515 | 
orchestrator | 2026-03-18 01:42:45.282523 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-18 01:42:45.282531 | orchestrator | Wednesday 18 March 2026 01:42:43 +0000 (0:00:01.174) 0:00:09.731 ******* 2026-03-18 01:42:45.282538 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:42:45.282545 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:42:45.282553 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:42:45.282560 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:42:45.282567 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:42:45.282574 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:42:45.282609 | orchestrator | 2026-03-18 01:42:45.282623 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-18 01:42:45.282635 | orchestrator | Wednesday 18 March 2026 01:42:43 +0000 (0:00:00.149) 0:00:09.881 ******* 2026-03-18 01:42:45.282645 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:42:45.282657 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:42:45.282669 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:42:45.282681 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:42:45.282692 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:42:45.282704 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:42:45.282712 | orchestrator | 2026-03-18 01:42:45.282719 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-18 01:42:45.282726 | orchestrator | Wednesday 18 March 2026 01:42:43 +0000 (0:00:00.199) 0:00:10.081 ******* 2026-03-18 01:42:45.282733 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:42:45.282740 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:42:45.282747 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:42:45.282754 | orchestrator | changed: [testbed-node-1] 2026-03-18 
01:42:45.282761 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:42:45.282773 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:42:45.282781 | orchestrator | 2026-03-18 01:42:45.282789 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-18 01:42:45.282796 | orchestrator | Wednesday 18 March 2026 01:42:44 +0000 (0:00:00.614) 0:00:10.695 ******* 2026-03-18 01:42:45.282803 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:42:45.282810 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:42:45.282817 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:42:45.282824 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:42:45.282831 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:42:45.282838 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:42:45.282845 | orchestrator | 2026-03-18 01:42:45.282852 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-18 01:42:45.282868 | orchestrator | Wednesday 18 March 2026 01:42:44 +0000 (0:00:00.179) 0:00:10.874 ******* 2026-03-18 01:42:45.282876 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-18 01:42:45.282883 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:42:45.282890 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-18 01:42:45.282897 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-18 01:42:45.282905 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-18 01:42:45.282912 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:42:45.282919 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:42:45.282926 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:42:45.282933 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-18 01:42:45.282940 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-18 01:42:45.282947 | orchestrator | changed: [testbed-node-2] 2026-03-18 
01:42:45.282954 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:42:45.282961 | orchestrator | 2026-03-18 01:42:45.282968 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-18 01:42:45.282975 | orchestrator | Wednesday 18 March 2026 01:42:44 +0000 (0:00:00.684) 0:00:11.559 ******* 2026-03-18 01:42:45.282988 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:42:45.282995 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:42:45.283002 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:42:45.283009 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:42:45.283016 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:42:45.283023 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:42:45.283030 | orchestrator | 2026-03-18 01:42:45.283037 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-18 01:42:45.283044 | orchestrator | Wednesday 18 March 2026 01:42:45 +0000 (0:00:00.181) 0:00:11.741 ******* 2026-03-18 01:42:45.283051 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:42:45.283058 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:42:45.283065 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:42:45.283072 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:42:45.283087 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:42:46.711883 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:42:46.712012 | orchestrator | 2026-03-18 01:42:46.712040 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-18 01:42:46.712147 | orchestrator | Wednesday 18 March 2026 01:42:45 +0000 (0:00:00.172) 0:00:11.913 ******* 2026-03-18 01:42:46.712161 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:42:46.712171 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:42:46.712182 | orchestrator | skipping: [testbed-node-2] 2026-03-18 
01:42:46.712193 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:42:46.712204 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:42:46.712214 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:42:46.712225 | orchestrator | 2026-03-18 01:42:46.712236 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-18 01:42:46.712247 | orchestrator | Wednesday 18 March 2026 01:42:45 +0000 (0:00:00.166) 0:00:12.080 ******* 2026-03-18 01:42:46.712258 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:42:46.712268 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:42:46.712279 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:42:46.712289 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:42:46.712300 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:42:46.712311 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:42:46.712321 | orchestrator | 2026-03-18 01:42:46.712332 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-18 01:42:46.712342 | orchestrator | Wednesday 18 March 2026 01:42:46 +0000 (0:00:00.672) 0:00:12.752 ******* 2026-03-18 01:42:46.712353 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:42:46.712363 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:42:46.712375 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:42:46.712386 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:42:46.712398 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:42:46.712411 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:42:46.712423 | orchestrator | 2026-03-18 01:42:46.712435 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 01:42:46.712471 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-18 01:42:46.712484 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-18 01:42:46.712497 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-18 01:42:46.712510 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-18 01:42:46.712522 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-18 01:42:46.712559 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-18 01:42:46.712571 | orchestrator | 2026-03-18 01:42:46.712611 | orchestrator | 2026-03-18 01:42:46.712631 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 01:42:46.712645 | orchestrator | Wednesday 18 March 2026 01:42:46 +0000 (0:00:00.297) 0:00:13.050 ******* 2026-03-18 01:42:46.712658 | orchestrator | =============================================================================== 2026-03-18 01:42:46.712670 | orchestrator | Gathering Facts --------------------------------------------------------- 3.34s 2026-03-18 01:42:46.712682 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s 2026-03-18 01:42:46.712694 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.17s 2026-03-18 01:42:46.712706 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.16s 2026-03-18 01:42:46.712719 | orchestrator | Do not require tty for all users ---------------------------------------- 0.83s 2026-03-18 01:42:46.712731 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2026-03-18 01:42:46.712743 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s 2026-03-18 01:42:46.712755 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.67s 2026-03-18 01:42:46.712766 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.65s 2026-03-18 01:42:46.712776 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s 2026-03-18 01:42:46.712787 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.30s 2026-03-18 01:42:46.712797 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.22s 2026-03-18 01:42:46.712808 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.21s 2026-03-18 01:42:46.712819 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.20s 2026-03-18 01:42:46.712829 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s 2026-03-18 01:42:46.712840 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s 2026-03-18 01:42:46.712851 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s 2026-03-18 01:42:46.712862 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2026-03-18 01:42:46.712872 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s 2026-03-18 01:42:47.153628 | orchestrator | + osism apply --environment custom facts 2026-03-18 01:42:49.303039 | orchestrator | 2026-03-18 01:42:49 | INFO  | Trying to run play facts in environment custom 2026-03-18 01:42:59.413946 | orchestrator | 2026-03-18 01:42:59 | INFO  | Task efb6637a-e3e5-451c-80f5-4f3d2f85bde2 (facts) was prepared for execution. 2026-03-18 01:42:59.414116 | orchestrator | 2026-03-18 01:42:59 | INFO  | It takes a moment until task efb6637a-e3e5-451c-80f5-4f3d2f85bde2 (facts) has been started and output is visible here. 
2026-03-18 01:43:42.575716 | orchestrator | 2026-03-18 01:43:42.575822 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-03-18 01:43:42.575832 | orchestrator | 2026-03-18 01:43:42.575840 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-18 01:43:42.575847 | orchestrator | Wednesday 18 March 2026 01:43:03 +0000 (0:00:00.089) 0:00:00.089 ******* 2026-03-18 01:43:42.575854 | orchestrator | ok: [testbed-manager] 2026-03-18 01:43:42.575861 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:43:42.575868 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:43:42.575874 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:43:42.575880 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:43:42.575886 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:43:42.575922 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:43:42.575930 | orchestrator | 2026-03-18 01:43:42.575936 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-03-18 01:43:42.575943 | orchestrator | Wednesday 18 March 2026 01:43:05 +0000 (0:00:01.403) 0:00:01.492 ******* 2026-03-18 01:43:42.575949 | orchestrator | ok: [testbed-manager] 2026-03-18 01:43:42.575955 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:43:42.575962 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:43:42.575968 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:43:42.575974 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:43:42.575980 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:43:42.575986 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:43:42.575992 | orchestrator | 2026-03-18 01:43:42.575998 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-03-18 01:43:42.576004 | orchestrator | 2026-03-18 01:43:42.576011 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-03-18 01:43:42.576017 | orchestrator | Wednesday 18 March 2026 01:43:06 +0000 (0:00:01.197) 0:00:02.689 ******* 2026-03-18 01:43:42.576023 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:43:42.576029 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:43:42.576036 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:43:42.576042 | orchestrator | 2026-03-18 01:43:42.576048 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-18 01:43:42.576055 | orchestrator | Wednesday 18 March 2026 01:43:06 +0000 (0:00:00.120) 0:00:02.810 ******* 2026-03-18 01:43:42.576061 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:43:42.576067 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:43:42.576073 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:43:42.576079 | orchestrator | 2026-03-18 01:43:42.576085 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-18 01:43:42.576091 | orchestrator | Wednesday 18 March 2026 01:43:06 +0000 (0:00:00.210) 0:00:03.021 ******* 2026-03-18 01:43:42.576098 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:43:42.576104 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:43:42.576110 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:43:42.576116 | orchestrator | 2026-03-18 01:43:42.576122 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-18 01:43:42.576129 | orchestrator | Wednesday 18 March 2026 01:43:06 +0000 (0:00:00.239) 0:00:03.261 ******* 2026-03-18 01:43:42.576137 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 01:43:42.576145 | orchestrator | 2026-03-18 01:43:42.576151 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-03-18 01:43:42.576157 | orchestrator | Wednesday 18 March 2026 01:43:06 +0000 (0:00:00.202) 0:00:03.463 ******* 2026-03-18 01:43:42.576163 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:43:42.576169 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:43:42.576175 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:43:42.576181 | orchestrator | 2026-03-18 01:43:42.576187 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-18 01:43:42.576193 | orchestrator | Wednesday 18 March 2026 01:43:07 +0000 (0:00:00.494) 0:00:03.958 ******* 2026-03-18 01:43:42.576199 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:43:42.576206 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:43:42.576212 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:43:42.576218 | orchestrator | 2026-03-18 01:43:42.576224 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-18 01:43:42.576230 | orchestrator | Wednesday 18 March 2026 01:43:07 +0000 (0:00:00.142) 0:00:04.101 ******* 2026-03-18 01:43:42.576235 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:43:42.576240 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:43:42.576245 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:43:42.576251 | orchestrator | 2026-03-18 01:43:42.576256 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-18 01:43:42.576266 | orchestrator | Wednesday 18 March 2026 01:43:08 +0000 (0:00:01.099) 0:00:05.200 ******* 2026-03-18 01:43:42.576271 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:43:42.576277 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:43:42.576282 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:43:42.576287 | orchestrator | 2026-03-18 01:43:42.576293 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-18 
01:43:42.576298 | orchestrator | Wednesday 18 March 2026 01:43:09 +0000 (0:00:00.440) 0:00:05.641 ******* 2026-03-18 01:43:42.576307 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:43:42.576316 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:43:42.576325 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:43:42.576334 | orchestrator | 2026-03-18 01:43:42.576388 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-18 01:43:42.576396 | orchestrator | Wednesday 18 March 2026 01:43:10 +0000 (0:00:01.066) 0:00:06.707 ******* 2026-03-18 01:43:42.576401 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:43:42.576406 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:43:42.576412 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:43:42.576417 | orchestrator | 2026-03-18 01:43:42.576422 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-03-18 01:43:42.576428 | orchestrator | Wednesday 18 March 2026 01:43:25 +0000 (0:00:15.290) 0:00:21.998 ******* 2026-03-18 01:43:42.576433 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:43:42.576438 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:43:42.576444 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:43:42.576449 | orchestrator | 2026-03-18 01:43:42.576454 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-03-18 01:43:42.576474 | orchestrator | Wednesday 18 March 2026 01:43:25 +0000 (0:00:00.088) 0:00:22.086 ******* 2026-03-18 01:43:42.576480 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:43:42.576486 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:43:42.576491 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:43:42.576496 | orchestrator | 2026-03-18 01:43:42.576502 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-18 
01:43:42.576507 | orchestrator | Wednesday 18 March 2026 01:43:33 +0000 (0:00:07.799) 0:00:29.885 *******
2026-03-18 01:43:42.576513 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:43:42.576518 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:43:42.576523 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:43:42.576529 | orchestrator |
2026-03-18 01:43:42.576534 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-18 01:43:42.576539 | orchestrator | Wednesday 18 March 2026 01:43:33 +0000 (0:00:00.463) 0:00:30.349 *******
2026-03-18 01:43:42.576545 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-18 01:43:42.576551 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-18 01:43:42.576556 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-18 01:43:42.576562 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-18 01:43:42.576570 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-18 01:43:42.576576 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-18 01:43:42.576581 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-18 01:43:42.576586 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-18 01:43:42.576592 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-18 01:43:42.576597 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-18 01:43:42.576602 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-18 01:43:42.576608 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-18 01:43:42.576613 | orchestrator |
2026-03-18 01:43:42.576618 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-18 01:43:42.576682 | orchestrator | Wednesday 18 March 2026 01:43:37 +0000 (0:00:03.546) 0:00:33.895 *******
2026-03-18 01:43:42.576688 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:43:42.576694 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:43:42.576699 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:43:42.576705 | orchestrator |
2026-03-18 01:43:42.576710 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-18 01:43:42.576716 | orchestrator |
2026-03-18 01:43:42.576721 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-18 01:43:42.576727 | orchestrator | Wednesday 18 March 2026 01:43:38 +0000 (0:00:01.426) 0:00:35.321 *******
2026-03-18 01:43:42.576732 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:43:42.576737 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:43:42.576743 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:43:42.576748 | orchestrator | ok: [testbed-manager]
2026-03-18 01:43:42.576754 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:43:42.576759 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:43:42.576764 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:43:42.576770 | orchestrator |
2026-03-18 01:43:42.576775 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 01:43:42.576781 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:43:42.576788 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:43:42.576795 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:43:42.576801 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:43:42.576806 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 01:43:42.576812 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 01:43:42.576817 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 01:43:42.576823 | orchestrator |
2026-03-18 01:43:42.576828 | orchestrator |
2026-03-18 01:43:42.576833 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 01:43:42.576839 | orchestrator | Wednesday 18 March 2026 01:43:42 +0000 (0:00:03.685) 0:00:39.007 *******
2026-03-18 01:43:42.576844 | orchestrator | ===============================================================================
2026-03-18 01:43:42.576850 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.29s
2026-03-18 01:43:42.576855 | orchestrator | Install required packages (Debian) -------------------------------------- 7.80s
2026-03-18 01:43:42.576860 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.69s
2026-03-18 01:43:42.576865 | orchestrator | Copy fact files --------------------------------------------------------- 3.55s
2026-03-18 01:43:42.576871 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.43s
2026-03-18 01:43:42.576876 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s
2026-03-18 01:43:42.576885 | orchestrator | Copy fact file ---------------------------------------------------------- 1.20s
2026-03-18 01:43:42.833511 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.10s
2026-03-18 01:43:42.833599 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s
2026-03-18 01:43:42.833610 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.49s
2026-03-18 01:43:42.833709 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-03-18 01:43:42.833719 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-03-18 01:43:42.833727 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s
2026-03-18 01:43:42.833735 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-03-18 01:43:42.833743 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.20s
2026-03-18 01:43:42.833752 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-03-18 01:43:42.833760 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2026-03-18 01:43:42.833781 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-03-18 01:43:43.161503 | orchestrator | + osism apply bootstrap
2026-03-18 01:43:55.368076 | orchestrator | 2026-03-18 01:43:55 | INFO  | Task 575c4f9f-1deb-4342-9115-a3ade400956d (bootstrap) was prepared for execution.
2026-03-18 01:43:55.368153 | orchestrator | 2026-03-18 01:43:55 | INFO  | It takes a moment until task 575c4f9f-1deb-4342-9115-a3ade400956d (bootstrap) has been started and output is visible here.
2026-03-18 01:44:12.107203 | orchestrator |
2026-03-18 01:44:12.107322 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-18 01:44:12.107339 | orchestrator |
2026-03-18 01:44:12.107351 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-18 01:44:12.107362 | orchestrator | Wednesday 18 March 2026 01:43:59 +0000 (0:00:00.166) 0:00:00.166 *******
2026-03-18 01:44:12.107374 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:12.107386 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:44:12.107397 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:44:12.107407 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:44:12.107418 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:44:12.107428 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:44:12.107439 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:44:12.107450 | orchestrator |
2026-03-18 01:44:12.107461 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-18 01:44:12.107472 | orchestrator |
2026-03-18 01:44:12.107483 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-18 01:44:12.107494 | orchestrator | Wednesday 18 March 2026 01:44:00 +0000 (0:00:00.267) 0:00:00.433 *******
2026-03-18 01:44:12.107504 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:44:12.107515 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:44:12.107526 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:44:12.107536 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:12.107547 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:44:12.107557 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:44:12.107568 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:44:12.107578 | orchestrator |
2026-03-18 01:44:12.107589 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-18 01:44:12.107600 | orchestrator |
2026-03-18 01:44:12.107610 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-18 01:44:12.107621 | orchestrator | Wednesday 18 March 2026 01:44:03 +0000 (0:00:03.601) 0:00:04.034 *******
2026-03-18 01:44:12.107633 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-18 01:44:12.107729 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-18 01:44:12.107745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-18 01:44:12.107759 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-18 01:44:12.107771 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-18 01:44:12.107784 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-18 01:44:12.107796 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-18 01:44:12.107809 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-18 01:44:12.107821 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-18 01:44:12.107858 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-18 01:44:12.107872 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-18 01:44:12.107884 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-18 01:44:12.107897 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-18 01:44:12.107910 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-18 01:44:12.107921 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-18 01:44:12.107932 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-18 01:44:12.107943 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-18 01:44:12.107954 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-18 01:44:12.107965 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-18 01:44:12.107976 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:44:12.107987 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:44:12.107997 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-18 01:44:12.108008 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-18 01:44:12.108019 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-18 01:44:12.108030 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-18 01:44:12.108041 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-18 01:44:12.108051 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-18 01:44:12.108062 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-18 01:44:12.108073 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-18 01:44:12.108083 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-18 01:44:12.108094 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-18 01:44:12.108105 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-18 01:44:12.108115 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-18 01:44:12.108126 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-18 01:44:12.108137 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-18 01:44:12.108148 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-18 01:44:12.108158 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:44:12.108169 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-18 01:44:12.108180 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-18 01:44:12.108191 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-18 01:44:12.108201 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:44:12.108213 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-18 01:44:12.108224 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-18 01:44:12.108234 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-18 01:44:12.108245 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-18 01:44:12.108256 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-18 01:44:12.108267 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:44:12.108296 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-18 01:44:12.108308 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-18 01:44:12.108318 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:44:12.108329 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-18 01:44:12.108340 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-18 01:44:12.108350 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-18 01:44:12.108361 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-18 01:44:12.108399 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-18 01:44:12.108410 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:44:12.108421 | orchestrator |
2026-03-18 01:44:12.108432 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-03-18 01:44:12.108444 | orchestrator |
2026-03-18 01:44:12.108464 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-03-18 01:44:12.108483 | orchestrator | Wednesday 18 March 2026 01:44:04 +0000 (0:00:00.522) 0:00:04.556 *******
2026-03-18 01:44:12.108501 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:44:12.108518 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:44:12.108536 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:44:12.108554 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:44:12.108570 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:44:12.108586 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:44:12.108603 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:12.108621 | orchestrator |
2026-03-18 01:44:12.108640 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-03-18 01:44:12.108686 | orchestrator | Wednesday 18 March 2026 01:44:05 +0000 (0:00:01.255) 0:00:05.811 *******
2026-03-18 01:44:12.108704 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:12.108722 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:44:12.108741 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:44:12.108758 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:44:12.108777 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:44:12.108788 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:44:12.108798 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:44:12.108809 | orchestrator |
2026-03-18 01:44:12.108819 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-03-18 01:44:12.108831 | orchestrator | Wednesday 18 March 2026 01:44:06 +0000 (0:00:01.234) 0:00:07.046 *******
2026-03-18 01:44:12.108842 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 01:44:12.108856 | orchestrator |
2026-03-18 01:44:12.108867 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-03-18 01:44:12.108878 | orchestrator | Wednesday 18 March 2026 01:44:07 +0000 (0:00:00.324) 0:00:07.370 *******
2026-03-18 01:44:12.108888 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:44:12.108899 | orchestrator | changed: [testbed-manager]
2026-03-18 01:44:12.108910 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:44:12.108921 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:44:12.108932 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:44:12.108942 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:44:12.108953 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:44:12.108964 | orchestrator |
2026-03-18 01:44:12.108975 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-03-18 01:44:12.108985 | orchestrator | Wednesday 18 March 2026 01:44:09 +0000 (0:00:02.375) 0:00:09.746 *******
2026-03-18 01:44:12.108996 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:44:12.109008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 01:44:12.109021 | orchestrator |
2026-03-18 01:44:12.109032 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-03-18 01:44:12.109043 | orchestrator | Wednesday 18 March 2026 01:44:09 +0000 (0:00:00.293) 0:00:10.039 *******
2026-03-18 01:44:12.109054 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:44:12.109065 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:44:12.109075 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:44:12.109086 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:44:12.109097 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:44:12.109107 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:44:12.109129 | orchestrator |
2026-03-18 01:44:12.109139 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-03-18 01:44:12.109150 | orchestrator | Wednesday 18 March 2026 01:44:10 +0000 (0:00:01.007) 0:00:11.046 *******
2026-03-18 01:44:12.109161 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:44:12.109172 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:44:12.109182 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:44:12.109193 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:44:12.109203 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:44:12.109214 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:44:12.109224 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:44:12.109235 | orchestrator |
2026-03-18 01:44:12.109246 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-03-18 01:44:12.109256 | orchestrator | Wednesday 18 March 2026 01:44:11 +0000 (0:00:00.613) 0:00:11.660 *******
2026-03-18 01:44:12.109267 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:44:12.109277 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:44:12.109288 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:44:12.109305 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:44:12.109316 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:44:12.109327 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:44:12.109337 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:12.109348 | orchestrator |
2026-03-18 01:44:12.109359 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-18 01:44:12.109370 | orchestrator | Wednesday 18 March 2026 01:44:11 +0000 (0:00:00.466) 0:00:12.127 *******
2026-03-18 01:44:12.109381 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:44:12.109392 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:44:12.109414 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:44:24.777094 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:44:24.777264 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:44:24.777294 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:44:24.777313 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:44:24.777331 | orchestrator |
2026-03-18 01:44:24.777351 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-18 01:44:24.777370 | orchestrator | Wednesday 18 March 2026 01:44:12 +0000 (0:00:00.267) 0:00:12.394 *******
2026-03-18 01:44:24.777392 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 01:44:24.777431 | orchestrator |
2026-03-18 01:44:24.777448 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-18 01:44:24.777467 | orchestrator | Wednesday 18 March 2026 01:44:12 +0000 (0:00:00.379) 0:00:12.774 *******
2026-03-18 01:44:24.777487 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 01:44:24.777504 | orchestrator |
2026-03-18 01:44:24.777521 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-18 01:44:24.777538 | orchestrator | Wednesday 18 March 2026 01:44:12 +0000 (0:00:00.358) 0:00:13.133 *******
2026-03-18 01:44:24.777555 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:44:24.777572 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:44:24.777591 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:44:24.777609 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:24.777626 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:44:24.777643 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:44:24.777693 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:44:24.777711 | orchestrator |
2026-03-18 01:44:24.777730 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-18 01:44:24.777749 | orchestrator | Wednesday 18 March 2026 01:44:14 +0000 (0:00:01.475) 0:00:14.608 *******
2026-03-18 01:44:24.777805 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:44:24.777823 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:44:24.777842 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:44:24.777859 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:44:24.777876 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:44:24.777895 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:44:24.777913 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:44:24.777929 | orchestrator |
2026-03-18 01:44:24.777940 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-18 01:44:24.777951 | orchestrator | Wednesday 18 March 2026 01:44:14 +0000 (0:00:00.352) 0:00:14.960 *******
2026-03-18 01:44:24.777962 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:24.777973 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:44:24.777983 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:44:24.777994 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:44:24.778005 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:44:24.778081 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:44:24.778093 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:44:24.778104 | orchestrator |
2026-03-18 01:44:24.778115 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-18 01:44:24.778126 | orchestrator | Wednesday 18 March 2026 01:44:15 +0000 (0:00:00.573) 0:00:15.534 *******
2026-03-18 01:44:24.778137 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:44:24.778147 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:44:24.778158 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:44:24.778169 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:44:24.778179 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:44:24.778190 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:44:24.778201 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:44:24.778212 | orchestrator |
2026-03-18 01:44:24.778224 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-18 01:44:24.778236 | orchestrator | Wednesday 18 March 2026 01:44:15 +0000 (0:00:00.277) 0:00:15.811 *******
2026-03-18 01:44:24.778261 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:24.778284 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:44:24.778294 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:44:24.778305 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:44:24.778316 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:44:24.778326 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:44:24.778337 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:44:24.778348 | orchestrator |
2026-03-18 01:44:24.778359 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-18 01:44:24.778370 | orchestrator | Wednesday 18 March 2026 01:44:16 +0000 (0:00:00.549) 0:00:16.361 *******
2026-03-18 01:44:24.778380 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:24.778391 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:44:24.778401 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:44:24.778412 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:44:24.778423 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:44:24.778433 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:44:24.778444 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:44:24.778454 | orchestrator |
2026-03-18 01:44:24.778465 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-18 01:44:24.778476 | orchestrator | Wednesday 18 March 2026 01:44:17 +0000 (0:00:01.144) 0:00:17.506 *******
2026-03-18 01:44:24.778487 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:44:24.778510 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:24.778522 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:44:24.778533 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:44:24.778544 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:44:24.778554 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:44:24.778565 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:44:24.778576 | orchestrator |
2026-03-18 01:44:24.778586 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-18 01:44:24.778608 | orchestrator | Wednesday 18 March 2026 01:44:18 +0000 (0:00:01.036) 0:00:18.542 *******
2026-03-18 01:44:24.778645 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 01:44:24.778679 | orchestrator |
2026-03-18 01:44:24.778690 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-18 01:44:24.778701 | orchestrator | Wednesday 18 March 2026 01:44:18 +0000 (0:00:00.366) 0:00:18.909 *******
2026-03-18 01:44:24.778712 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:44:24.778723 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:44:24.778733 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:44:24.778744 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:44:24.778755 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:44:24.778765 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:44:24.778776 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:44:24.778787 | orchestrator |
2026-03-18 01:44:24.778798 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-18 01:44:24.778808 | orchestrator | Wednesday 18 March 2026 01:44:20 +0000 (0:00:01.320) 0:00:20.229 *******
2026-03-18 01:44:24.778819 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:24.778830 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:44:24.778841 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:44:24.778852 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:44:24.778863 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:44:24.778874 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:44:24.778885 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:44:24.778895 | orchestrator |
2026-03-18 01:44:24.778906 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-18 01:44:24.778917 | orchestrator | Wednesday 18 March 2026 01:44:20 +0000 (0:00:00.245) 0:00:20.475 *******
2026-03-18 01:44:24.778928 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:24.778939 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:44:24.778950 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:44:24.778960 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:44:24.778971 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:44:24.778981 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:44:24.778992 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:44:24.779003 | orchestrator |
2026-03-18 01:44:24.779014 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-18 01:44:24.779025 | orchestrator | Wednesday 18 March 2026 01:44:20 +0000 (0:00:00.253) 0:00:20.729 *******
2026-03-18 01:44:24.779035 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:24.779046 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:44:24.779057 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:44:24.779068 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:44:24.779078 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:44:24.779089 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:44:24.779099 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:44:24.779110 | orchestrator |
2026-03-18 01:44:24.779121 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-18 01:44:24.779132 | orchestrator | Wednesday 18 March 2026 01:44:20 +0000 (0:00:00.253) 0:00:20.982 *******
2026-03-18 01:44:24.779143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 01:44:24.779156 | orchestrator |
2026-03-18 01:44:24.779167 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-18 01:44:24.779178 | orchestrator | Wednesday 18 March 2026 01:44:21 +0000 (0:00:00.354) 0:00:21.337 *******
2026-03-18 01:44:24.779189 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:24.779199 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:44:24.779217 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:44:24.779228 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:44:24.779239 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:44:24.779249 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:44:24.779260 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:44:24.779271 | orchestrator |
2026-03-18 01:44:24.779285 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-18 01:44:24.779304 | orchestrator | Wednesday 18 March 2026 01:44:21 +0000 (0:00:00.539) 0:00:21.877 *******
2026-03-18 01:44:24.779331 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:44:24.779350 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:44:24.779370 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:44:24.779389 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:44:24.779408 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:44:24.779426 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:44:24.779446 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:44:24.779466 | orchestrator |
2026-03-18 01:44:24.779487 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-18 01:44:24.779508 | orchestrator | Wednesday 18 March 2026 01:44:21 +0000 (0:00:00.302) 0:00:22.179 *******
2026-03-18 01:44:24.779527 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:44:24.779548 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:24.779560 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:44:24.779571 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:44:24.779581 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:44:24.779592 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:44:24.779603 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:44:24.779613 | orchestrator |
2026-03-18 01:44:24.779624 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-18 01:44:24.779634 | orchestrator | Wednesday 18 March 2026 01:44:23 +0000 (0:00:01.101) 0:00:23.280 *******
2026-03-18 01:44:24.779645 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:24.779687 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:44:24.779699 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:44:24.779710 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:44:24.779721 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:44:24.779732 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:44:24.779742 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:44:24.779753 | orchestrator |
2026-03-18 01:44:24.779763 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-18 01:44:24.779774 | orchestrator | Wednesday 18 March 2026 01:44:23 +0000 (0:00:00.566) 0:00:23.847 *******
2026-03-18 01:44:24.779785 | orchestrator | ok: [testbed-manager]
2026-03-18 01:44:24.779796 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:44:24.779817 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:44:24.779828 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:44:24.779851 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:45:06.494390 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:45:06.494522 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:45:06.494536 | orchestrator |
2026-03-18 01:45:06.494547 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-18 01:45:06.494558 | orchestrator | Wednesday 18 March 2026 01:44:24 +0000 (0:00:01.102) 0:00:24.950 *******
2026-03-18 01:45:06.494567 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:45:06.494577 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:45:06.494586 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:45:06.494594 | orchestrator | changed: [testbed-manager]
2026-03-18 01:45:06.494604 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:45:06.494612 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:45:06.494621 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:45:06.494630 | orchestrator |
2026-03-18 01:45:06.494639 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-18 01:45:06.494648 | orchestrator | Wednesday 18 March 2026 01:44:39 +0000 (0:00:15.227) 0:00:40.177 *******
2026-03-18 01:45:06.494657 | orchestrator | ok: [testbed-manager]
2026-03-18 01:45:06.494727 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:45:06.494744 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:45:06.494757 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:45:06.494777 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:45:06.494793 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:45:06.494807 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:45:06.494820 | orchestrator |
2026-03-18 01:45:06.494835 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-18 01:45:06.494850 | orchestrator | Wednesday 18 March 2026 01:44:40 +0000 (0:00:00.269) 0:00:40.447 *******
2026-03-18 01:45:06.494863 | orchestrator | ok: [testbed-manager]
2026-03-18 01:45:06.494877 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:45:06.494891 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:45:06.494905 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:45:06.494918 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:45:06.494932 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:45:06.494947 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:45:06.494963 | orchestrator |
2026-03-18 01:45:06.494979 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-18 01:45:06.494997 | orchestrator | Wednesday 18 March 2026 01:44:40 +0000 (0:00:00.295) 0:00:40.743 *******
2026-03-18 01:45:06.495014 | orchestrator | ok: [testbed-manager]
2026-03-18 01:45:06.495029 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:45:06.495046 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:45:06.495063 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:45:06.495079 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:45:06.495096 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:45:06.495114 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:45:06.495129 | orchestrator |
2026-03-18 01:45:06.495146 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-18 01:45:06.495161 | orchestrator | Wednesday 18 March 2026 01:44:40 +0000 (0:00:00.241) 0:00:40.984 ******* 2026-03-18
01:45:06.495180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 01:45:06.495201 | orchestrator | 2026-03-18 01:45:06.495218 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-18 01:45:06.495230 | orchestrator | Wednesday 18 March 2026 01:44:41 +0000 (0:00:00.356) 0:00:41.340 ******* 2026-03-18 01:45:06.495240 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:45:06.495250 | orchestrator | ok: [testbed-manager] 2026-03-18 01:45:06.495260 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:45:06.495270 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:45:06.495281 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:45:06.495292 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:45:06.495302 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:45:06.495310 | orchestrator | 2026-03-18 01:45:06.495319 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-18 01:45:06.495327 | orchestrator | Wednesday 18 March 2026 01:44:42 +0000 (0:00:01.795) 0:00:43.135 ******* 2026-03-18 01:45:06.495337 | orchestrator | changed: [testbed-manager] 2026-03-18 01:45:06.495348 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:45:06.495359 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:45:06.495369 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:45:06.495380 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:45:06.495391 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:45:06.495425 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:45:06.495436 | orchestrator | 2026-03-18 01:45:06.495447 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-18 01:45:06.495457 | 
orchestrator | Wednesday 18 March 2026 01:44:44 +0000 (0:00:01.079) 0:00:44.215 ******* 2026-03-18 01:45:06.495468 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:45:06.495479 | orchestrator | ok: [testbed-manager] 2026-03-18 01:45:06.495490 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:45:06.495515 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:45:06.495526 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:45:06.495536 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:45:06.495547 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:45:06.495558 | orchestrator | 2026-03-18 01:45:06.495569 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-18 01:45:06.495580 | orchestrator | Wednesday 18 March 2026 01:44:44 +0000 (0:00:00.834) 0:00:45.050 ******* 2026-03-18 01:45:06.495592 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 01:45:06.495605 | orchestrator | 2026-03-18 01:45:06.495632 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-18 01:45:06.495644 | orchestrator | Wednesday 18 March 2026 01:44:45 +0000 (0:00:00.355) 0:00:45.406 ******* 2026-03-18 01:45:06.495655 | orchestrator | changed: [testbed-manager] 2026-03-18 01:45:06.495666 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:45:06.495676 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:45:06.495727 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:45:06.495739 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:45:06.495750 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:45:06.495760 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:45:06.495771 | orchestrator | 2026-03-18 01:45:06.495804 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-03-18 01:45:06.495816 | orchestrator | Wednesday 18 March 2026 01:44:46 +0000 (0:00:01.189) 0:00:46.595 ******* 2026-03-18 01:45:06.495827 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:45:06.495838 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:45:06.495849 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:45:06.495860 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:45:06.495870 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:45:06.495881 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:45:06.495891 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:45:06.495902 | orchestrator | 2026-03-18 01:45:06.495913 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-03-18 01:45:06.495923 | orchestrator | Wednesday 18 March 2026 01:44:46 +0000 (0:00:00.270) 0:00:46.865 ******* 2026-03-18 01:45:06.495935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 01:45:06.495946 | orchestrator | 2026-03-18 01:45:06.495956 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-03-18 01:45:06.495967 | orchestrator | Wednesday 18 March 2026 01:44:47 +0000 (0:00:00.335) 0:00:47.201 ******* 2026-03-18 01:45:06.495978 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:45:06.495988 | orchestrator | ok: [testbed-manager] 2026-03-18 01:45:06.495999 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:45:06.496010 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:45:06.496021 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:45:06.496031 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:45:06.496042 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:45:06.496052 | 
orchestrator | 2026-03-18 01:45:06.496063 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-03-18 01:45:06.496074 | orchestrator | Wednesday 18 March 2026 01:44:48 +0000 (0:00:01.651) 0:00:48.853 ******* 2026-03-18 01:45:06.496085 | orchestrator | changed: [testbed-manager] 2026-03-18 01:45:06.496095 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:45:06.496106 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:45:06.496117 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:45:06.496127 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:45:06.496138 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:45:06.496148 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:45:06.496167 | orchestrator | 2026-03-18 01:45:06.496178 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-03-18 01:45:06.496194 | orchestrator | Wednesday 18 March 2026 01:44:49 +0000 (0:00:01.151) 0:00:50.004 ******* 2026-03-18 01:45:06.496212 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:45:06.496230 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:45:06.496247 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:45:06.496265 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:45:06.496281 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:45:06.496298 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:45:06.496314 | orchestrator | changed: [testbed-manager] 2026-03-18 01:45:06.496332 | orchestrator | 2026-03-18 01:45:06.496348 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-03-18 01:45:06.496365 | orchestrator | Wednesday 18 March 2026 01:45:03 +0000 (0:00:13.449) 0:01:03.453 ******* 2026-03-18 01:45:06.496383 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:45:06.496401 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:45:06.496421 | orchestrator | ok: 
[testbed-node-0] 2026-03-18 01:45:06.496439 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:45:06.496458 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:45:06.496469 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:45:06.496480 | orchestrator | ok: [testbed-manager] 2026-03-18 01:45:06.496490 | orchestrator | 2026-03-18 01:45:06.496501 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-03-18 01:45:06.496512 | orchestrator | Wednesday 18 March 2026 01:45:04 +0000 (0:00:01.445) 0:01:04.899 ******* 2026-03-18 01:45:06.496522 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:45:06.496533 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:45:06.496543 | orchestrator | ok: [testbed-manager] 2026-03-18 01:45:06.496554 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:45:06.496564 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:45:06.496575 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:45:06.496585 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:45:06.496595 | orchestrator | 2026-03-18 01:45:06.496606 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-03-18 01:45:06.496616 | orchestrator | Wednesday 18 March 2026 01:45:05 +0000 (0:00:00.945) 0:01:05.844 ******* 2026-03-18 01:45:06.496627 | orchestrator | ok: [testbed-manager] 2026-03-18 01:45:06.496637 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:45:06.496648 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:45:06.496658 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:45:06.496669 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:45:06.496735 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:45:06.496749 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:45:06.496760 | orchestrator | 2026-03-18 01:45:06.496772 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-18 01:45:06.496783 | orchestrator | 
Wednesday 18 March 2026 01:45:05 +0000 (0:00:00.239) 0:01:06.084 ******* 2026-03-18 01:45:06.496794 | orchestrator | ok: [testbed-manager] 2026-03-18 01:45:06.496805 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:45:06.496816 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:45:06.496827 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:45:06.496838 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:45:06.496848 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:45:06.496859 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:45:06.496870 | orchestrator | 2026-03-18 01:45:06.496889 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-18 01:45:06.496901 | orchestrator | Wednesday 18 March 2026 01:45:06 +0000 (0:00:00.267) 0:01:06.351 ******* 2026-03-18 01:45:06.496912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 01:45:06.496924 | orchestrator | 2026-03-18 01:45:06.496947 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-18 01:47:24.519193 | orchestrator | Wednesday 18 March 2026 01:45:06 +0000 (0:00:00.317) 0:01:06.669 ******* 2026-03-18 01:47:24.519305 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:47:24.519321 | orchestrator | ok: [testbed-manager] 2026-03-18 01:47:24.519331 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:47:24.519341 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:47:24.519351 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:47:24.519360 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:47:24.519370 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:47:24.519379 | orchestrator | 2026-03-18 01:47:24.519390 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-03-18 01:47:24.519400 | orchestrator | Wednesday 18 March 2026 01:45:08 +0000 (0:00:01.788) 0:01:08.458 ******* 2026-03-18 01:47:24.519410 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:47:24.519420 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:47:24.519429 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:47:24.519439 | orchestrator | changed: [testbed-manager] 2026-03-18 01:47:24.519448 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:47:24.519458 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:47:24.519467 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:47:24.519477 | orchestrator | 2026-03-18 01:47:24.519487 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-18 01:47:24.519497 | orchestrator | Wednesday 18 March 2026 01:45:08 +0000 (0:00:00.704) 0:01:09.162 ******* 2026-03-18 01:47:24.519506 | orchestrator | ok: [testbed-manager] 2026-03-18 01:47:24.519516 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:47:24.519525 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:47:24.519534 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:47:24.519544 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:47:24.519553 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:47:24.519562 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:47:24.519572 | orchestrator | 2026-03-18 01:47:24.519582 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-18 01:47:24.519592 | orchestrator | Wednesday 18 March 2026 01:45:09 +0000 (0:00:00.235) 0:01:09.398 ******* 2026-03-18 01:47:24.519601 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:47:24.519611 | orchestrator | ok: [testbed-manager] 2026-03-18 01:47:24.519621 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:47:24.519630 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:47:24.519640 | orchestrator | ok: [testbed-node-5] 
2026-03-18 01:47:24.519649 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:47:24.519658 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:47:24.519668 | orchestrator | 2026-03-18 01:47:24.519677 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-18 01:47:24.519687 | orchestrator | Wednesday 18 March 2026 01:45:10 +0000 (0:00:01.213) 0:01:10.612 ******* 2026-03-18 01:47:24.519696 | orchestrator | changed: [testbed-manager] 2026-03-18 01:47:24.519706 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:47:24.519715 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:47:24.519725 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:47:24.519734 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:47:24.519745 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:47:24.519756 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:47:24.519767 | orchestrator | 2026-03-18 01:47:24.519804 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-18 01:47:24.519815 | orchestrator | Wednesday 18 March 2026 01:45:12 +0000 (0:00:01.819) 0:01:12.431 ******* 2026-03-18 01:47:24.519826 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:47:24.519836 | orchestrator | ok: [testbed-manager] 2026-03-18 01:47:24.519850 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:47:24.519869 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:47:24.519887 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:47:24.519903 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:47:24.519920 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:47:24.519938 | orchestrator | 2026-03-18 01:47:24.519962 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-18 01:47:24.520013 | orchestrator | Wednesday 18 March 2026 01:45:14 +0000 (0:00:02.493) 0:01:14.924 ******* 2026-03-18 01:47:24.520034 | orchestrator | ok: 
[testbed-manager] 2026-03-18 01:47:24.520052 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:47:24.520068 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:47:24.520078 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:47:24.520087 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:47:24.520096 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:47:24.520105 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:47:24.520115 | orchestrator | 2026-03-18 01:47:24.520124 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-18 01:47:24.520133 | orchestrator | Wednesday 18 March 2026 01:45:48 +0000 (0:00:34.234) 0:01:49.158 ******* 2026-03-18 01:47:24.520143 | orchestrator | changed: [testbed-manager] 2026-03-18 01:47:24.520152 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:47:24.520162 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:47:24.520171 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:47:24.520180 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:47:24.520189 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:47:24.520198 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:47:24.520208 | orchestrator | 2026-03-18 01:47:24.520217 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-18 01:47:24.520226 | orchestrator | Wednesday 18 March 2026 01:47:08 +0000 (0:01:19.161) 0:03:08.320 ******* 2026-03-18 01:47:24.520236 | orchestrator | ok: [testbed-manager] 2026-03-18 01:47:24.520245 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:47:24.520255 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:47:24.520264 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:47:24.520273 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:47:24.520283 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:47:24.520292 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:47:24.520301 | orchestrator | 2026-03-18 01:47:24.520311 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-03-18 01:47:24.520321 | orchestrator | Wednesday 18 March 2026 01:47:09 +0000 (0:00:01.716) 0:03:10.036 ******* 2026-03-18 01:47:24.520330 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:47:24.520339 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:47:24.520349 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:47:24.520358 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:47:24.520367 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:47:24.520377 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:47:24.520386 | orchestrator | changed: [testbed-manager] 2026-03-18 01:47:24.520395 | orchestrator | 2026-03-18 01:47:24.520405 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-18 01:47:24.520415 | orchestrator | Wednesday 18 March 2026 01:47:23 +0000 (0:00:13.307) 0:03:23.344 ******* 2026-03-18 01:47:24.520485 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-18 01:47:24.520518 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-18 01:47:24.520541 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-18 01:47:24.520553 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-18 01:47:24.520563 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-18 01:47:24.520573 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-03-18 01:47:24.520582 | orchestrator | 2026-03-18 01:47:24.520592 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-18 01:47:24.520602 | orchestrator | Wednesday 18 March 2026 01:47:23 +0000 (0:00:00.485) 0:03:23.830 ******* 2026-03-18 01:47:24.520611 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-03-18 01:47:24.520621 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:47:24.520630 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-18 01:47:24.520640 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:47:24.520650 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-18 01:47:24.520659 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-18 01:47:24.520669 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:47:24.520678 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:47:24.520688 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-18 01:47:24.520697 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-18 01:47:24.520706 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-18 01:47:24.520716 | orchestrator | 2026-03-18 01:47:24.520725 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-18 01:47:24.520735 | orchestrator | Wednesday 18 March 2026 01:47:24 +0000 (0:00:00.772) 0:03:24.602 ******* 2026-03-18 01:47:24.520744 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-18 01:47:24.520759 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-18 01:47:24.520769 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-18 01:47:24.520806 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-18 01:47:24.520822 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})  2026-03-18 01:47:24.520839 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-18 01:47:30.188070 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-18 01:47:30.188184 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-18 01:47:30.188222 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-18 01:47:30.188237 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-18 01:47:30.188248 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-18 01:47:30.188259 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-18 01:47:30.188269 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-18 01:47:30.188280 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-18 01:47:30.188290 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-18 01:47:30.188300 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-18 01:47:30.188311 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-18 01:47:30.188322 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-18 01:47:30.188333 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:47:30.188345 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-18 01:47:30.188355 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-18 01:47:30.188366 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-18 01:47:30.188376 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:47:30.188387 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-18 01:47:30.188398 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-18 01:47:30.188408 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-18 01:47:30.188419 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-18 01:47:30.188429 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-18 01:47:30.188439 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-18 01:47:30.188450 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-18 01:47:30.188460 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-18 01:47:30.188471 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-18 01:47:30.188481 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-18 01:47:30.188492 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-18 01:47:30.188502 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-18 01:47:30.188513 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 
'value': 16777216}) 
2026-03-18 01:47:30.188524 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:47:30.188534 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 
2026-03-18 01:47:30.188545 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 
2026-03-18 01:47:30.188556 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 
2026-03-18 01:47:30.188566 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 
2026-03-18 01:47:30.188579 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 
2026-03-18 01:47:30.188600 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 
2026-03-18 01:47:30.188613 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:47:30.188642 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-18 01:47:30.188655 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-18 01:47:30.188666 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-18 01:47:30.188678 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-18 01:47:30.188691 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-18 01:47:30.188723 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-18 01:47:30.188736 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-18 01:47:30.188749 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-18 01:47:30.188761 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-18 01:47:30.188773 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-18 01:47:30.188817 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-18 01:47:30.188829 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-18 01:47:30.188840 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-18 01:47:30.188850 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-18 01:47:30.188861 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-18 01:47:30.188871 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-18 01:47:30.188881 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-18 01:47:30.188892 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-18 01:47:30.188902 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-18 01:47:30.188913 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-18 01:47:30.188923 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-18 01:47:30.188933 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-18 01:47:30.188944 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-18 01:47:30.188955 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-18 01:47:30.188965 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-18 01:47:30.188976 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-18 01:47:30.188986 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-18 01:47:30.188997 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-18 01:47:30.189007 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-18 01:47:30.189018 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-18 01:47:30.189036 | orchestrator |
2026-03-18 01:47:30.189048 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-18 01:47:30.189058 | orchestrator | Wednesday 18 March 2026 01:47:29 +0000 (0:00:04.753) 0:03:29.356 *******
2026-03-18 01:47:30.189069 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-18 01:47:30.189080 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-18 01:47:30.189090 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-18 01:47:30.189100 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-18 01:47:30.189111 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-18 01:47:30.189121 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-18 01:47:30.189132 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-18 01:47:30.189142 | orchestrator |
2026-03-18 01:47:30.189153 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-18 01:47:30.189163 | orchestrator | Wednesday 18 March 2026 01:47:29 +0000 (0:00:00.564) 0:03:29.921 *******
2026-03-18 01:47:30.189174 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 
2026-03-18 01:47:30.189184 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:47:30.189195 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 
2026-03-18 01:47:30.189211 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 
2026-03-18 01:47:30.189222 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:47:30.189233 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:47:30.189243 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 
2026-03-18 01:47:30.189254 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:47:30.189265 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-18 01:47:30.189275 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-18 01:47:30.189294 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-18 01:47:45.130286 | orchestrator |
2026-03-18 01:47:45.130402 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-18 01:47:45.130418 | orchestrator | Wednesday 18 March 2026 01:47:30 +0000 (0:00:00.442) 0:03:30.364 *******
2026-03-18 01:47:45.130428 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 
2026-03-18 01:47:45.130440 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 
2026-03-18 01:47:45.130450 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:47:45.130461 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:47:45.130470 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 
2026-03-18 01:47:45.130480 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 
2026-03-18 01:47:45.130489 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:47:45.130499 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:47:45.130509 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-18 01:47:45.130518 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-18 01:47:45.130528 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-18 01:47:45.130537 | orchestrator |
2026-03-18 01:47:45.130547 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-18 01:47:45.130580 | orchestrator | Wednesday 18 March 2026 01:47:30 +0000 (0:00:00.601) 0:03:30.966 *******
2026-03-18 01:47:45.130590 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 
2026-03-18 01:47:45.130600 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:47:45.130610 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 
2026-03-18 01:47:45.130619 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:47:45.130628 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 
2026-03-18 01:47:45.130638 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:47:45.130647 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 
2026-03-18 01:47:45.130657 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:47:45.130666 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-18 01:47:45.130676 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-18 01:47:45.130685 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-18 01:47:45.130695 | orchestrator |
2026-03-18 01:47:45.130705 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-18 01:47:45.130714 | orchestrator | Wednesday 18 March 2026 01:47:32 +0000 (0:00:01.620) 0:03:32.587 *******
2026-03-18 01:47:45.130724 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:47:45.130733 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:47:45.130742 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:47:45.130751 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:47:45.130761 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:47:45.130770 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:47:45.130779 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:47:45.130789 | orchestrator |
2026-03-18 01:47:45.130840 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-18 01:47:45.130854 | orchestrator | Wednesday 18 March 2026 01:47:32 +0000 (0:00:00.313) 0:03:32.901 *******
2026-03-18 01:47:45.130864 | orchestrator | ok: [testbed-manager]
2026-03-18 01:47:45.130877 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:47:45.130887 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:47:45.130898 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:47:45.130908 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:47:45.130919 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:47:45.130929 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:47:45.130940 | orchestrator |
2026-03-18 01:47:45.130951 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-18 01:47:45.130962 | orchestrator | Wednesday 18 March 2026 01:47:38 +0000 (0:00:06.040) 0:03:38.941 *******
2026-03-18 01:47:45.130972 | orchestrator | skipping: [testbed-manager] => (item=nscd) 
2026-03-18 01:47:45.130984 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:47:45.130995 | orchestrator | skipping: [testbed-node-3] => (item=nscd) 
2026-03-18 01:47:45.131006 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:47:45.131017 | orchestrator | skipping: [testbed-node-4] => (item=nscd) 
2026-03-18 01:47:45.131028 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:47:45.131039 | orchestrator | skipping: [testbed-node-5] => (item=nscd) 
2026-03-18 01:47:45.131050 | orchestrator | skipping: [testbed-node-0] => (item=nscd) 
2026-03-18 01:47:45.131062 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:47:45.131072 | orchestrator | skipping: [testbed-node-1] => (item=nscd) 
2026-03-18 01:47:45.131101 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:47:45.131112 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:47:45.131123 | orchestrator | skipping: [testbed-node-2] => (item=nscd) 
2026-03-18 01:47:45.131133 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:47:45.131149 | orchestrator |
2026-03-18 01:47:45.131176 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-18 01:47:45.131202 | orchestrator | Wednesday 18 March 2026 01:47:39 +0000 (0:00:00.322) 0:03:39.264 *******
2026-03-18 01:47:45.131219 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-18 01:47:45.131234 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-18 01:47:45.131249 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-18 01:47:45.131284 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-18 01:47:45.131301 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-18 01:47:45.131317 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-18 01:47:45.131332 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-18 01:47:45.131345 | orchestrator |
2026-03-18 01:47:45.131359 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-18 01:47:45.131373 | orchestrator | Wednesday 18 March 2026 01:47:40 +0000 (0:00:01.343) 0:03:40.607 *******
2026-03-18 01:47:45.131389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 01:47:45.131407 | orchestrator |
2026-03-18 01:47:45.131423 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-18 01:47:45.131438 | orchestrator | Wednesday 18 March 2026 01:47:40 +0000 (0:00:00.451) 0:03:41.059 *******
2026-03-18 01:47:45.131454 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:47:45.131470 | orchestrator | ok: [testbed-manager]
2026-03-18 01:47:45.131486 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:47:45.131503 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:47:45.131519 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:47:45.131531 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:47:45.131540 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:47:45.131550 | orchestrator |
2026-03-18 01:47:45.131559 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-18 01:47:45.131569 | orchestrator | Wednesday 18 March 2026 01:47:42 +0000 (0:00:01.371) 0:03:42.431 *******
2026-03-18 01:47:45.131578 | orchestrator | ok: [testbed-manager]
2026-03-18 01:47:45.131588 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:47:45.131597 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:47:45.131606 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:47:45.131616 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:47:45.131625 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:47:45.131634 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:47:45.131644 | orchestrator |
2026-03-18 01:47:45.131654 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-18 01:47:45.131663 | orchestrator | Wednesday 18 March 2026 01:47:42 +0000 (0:00:00.667) 0:03:43.098 *******
2026-03-18 01:47:45.131673 | orchestrator | changed: [testbed-manager]
2026-03-18 01:47:45.131682 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:47:45.131691 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:47:45.131701 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:47:45.131710 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:47:45.131719 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:47:45.131728 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:47:45.131738 | orchestrator |
2026-03-18 01:47:45.131747 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-18 01:47:45.131757 | orchestrator | Wednesday 18 March 2026 01:47:43 +0000 (0:00:00.655) 0:03:43.754 *******
2026-03-18 01:47:45.131766 | orchestrator | ok: [testbed-manager]
2026-03-18 01:47:45.131776 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:47:45.131785 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:47:45.131794 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:47:45.131829 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:47:45.131839 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:47:45.131848 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:47:45.131858 | orchestrator |
2026-03-18 01:47:45.131868 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-03-18 01:47:45.131887 | orchestrator | Wednesday 18 March 2026 01:47:44 +0000 (0:00:00.581) 0:03:44.336 *******
2026-03-18 01:47:45.131901 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773796855.2262564, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-18 01:47:45.131914 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773796834.7511017, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-18 01:47:45.131933 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773796857.153234, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-18 01:47:45.131967 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773796854.460517, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-18 01:47:50.116225 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773796864.1702533, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-18 01:47:50.116345 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773796865.3663108, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-18 01:47:50.116364 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773796859.0823586, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-18 01:47:50.116413 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-18 01:47:50.116429 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-18 01:47:50.116464 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-18 01:47:50.116481 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-18 01:47:50.116517 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-18 01:47:50.116533 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-18 01:47:50.116548 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-18 01:47:50.116576 | orchestrator |
2026-03-18 01:47:50.116594 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-03-18 01:47:50.116611 | orchestrator | Wednesday 18 March 2026 01:47:45 +0000 (0:00:00.971) 0:03:45.307 *******
2026-03-18 01:47:50.116627 | orchestrator | changed: [testbed-manager]
2026-03-18 01:47:50.116648 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:47:50.116671 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:47:50.116692 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:47:50.116715 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:47:50.116737 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:47:50.116758 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:47:50.116779 | orchestrator |
2026-03-18 01:47:50.116852 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-03-18 01:47:50.116875 | orchestrator | Wednesday 18 March 2026 01:47:46 +0000 (0:00:01.131) 0:03:46.438 *******
2026-03-18 01:47:50.116898 | orchestrator | changed: [testbed-manager]
2026-03-18 01:47:50.116918 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:47:50.116939 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:47:50.116960 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:47:50.116979 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:47:50.117002 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:47:50.117018 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:47:50.117033 | orchestrator |
2026-03-18 01:47:50.117048 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-03-18 01:47:50.117060 | orchestrator | Wednesday 18 March 2026 01:47:47 +0000 (0:00:01.209) 0:03:47.648 *******
2026-03-18 01:47:50.117073 | orchestrator | changed: [testbed-manager]
2026-03-18 01:47:50.117085 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:47:50.117097 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:47:50.117109 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:47:50.117122 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:47:50.117133 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:47:50.117144 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:47:50.117156 | orchestrator |
2026-03-18 01:47:50.117168 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-03-18 01:47:50.117180 | orchestrator | Wednesday 18 March 2026 01:47:48 +0000 (0:00:00.267) 0:03:48.840 *******
2026-03-18 01:47:50.117192 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:47:50.117204 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:47:50.117223 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:47:50.117236 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:47:50.117248 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:47:50.117260 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:47:50.117273 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:47:50.117287 | orchestrator |
2026-03-18 01:47:50.117300 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-03-18 01:47:50.117313 | orchestrator | Wednesday 18 March 2026 01:47:48 +0000 (0:00:00.267) 0:03:49.107 *******
2026-03-18 01:47:50.117326 | orchestrator | ok: [testbed-manager]
2026-03-18 01:47:50.117341 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:47:50.117353 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:47:50.117365 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:47:50.117378 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:47:50.117391 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:47:50.117404 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:47:50.117418 | orchestrator |
2026-03-18 01:47:50.117430 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-03-18 01:47:50.117444 | orchestrator | Wednesday 18 March 2026 01:47:49 +0000 (0:00:00.767) 0:03:49.875 *******
2026-03-18 01:47:50.117460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 01:47:50.117483 | orchestrator |
2026-03-18 01:47:50.117497 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-03-18 01:47:50.117521 | orchestrator | Wednesday 18 March 2026 01:47:50 +0000 (0:00:00.421) 0:03:50.296 *******
2026-03-18 01:49:07.637138 | orchestrator | ok: [testbed-manager]
2026-03-18 01:49:07.637264 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:49:07.637280 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:49:07.637288 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:49:07.637295 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:49:07.637339 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:49:07.637348 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:49:07.637356 | orchestrator |
2026-03-18 01:49:07.637365 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-03-18 01:49:07.637375 | orchestrator | Wednesday 18 March 2026 01:47:58 +0000 (0:00:08.345) 0:03:58.642 *******
2026-03-18 01:49:07.637383 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:49:07.637390 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:49:07.637398 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:49:07.637405 | orchestrator | ok: [testbed-manager]
2026-03-18 01:49:07.637412 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:49:07.637420 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:49:07.637427 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:49:07.637435 | orchestrator |
2026-03-18 01:49:07.637442 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-03-18 01:49:07.637450 | orchestrator | Wednesday 18 March 2026 01:47:59 +0000 (0:00:01.085) 0:03:59.728 *******
2026-03-18 01:49:07.637457 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:49:07.637464 | orchestrator | ok: [testbed-manager]
2026-03-18 01:49:07.637470 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:49:07.637477 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:49:07.637484 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:49:07.637491 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:49:07.637498 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:49:07.637505 | orchestrator |
2026-03-18 01:49:07.637512 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-03-18 01:49:07.637520 | orchestrator | Wednesday 18 March 2026 01:48:00 +0000 (0:00:01.091) 0:04:00.819 *******
2026-03-18 01:49:07.637527 | orchestrator | ok: [testbed-manager]
2026-03-18 01:49:07.637534 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:49:07.637542 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:49:07.637549 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:49:07.637557 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:49:07.637564 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:49:07.637571 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:49:07.637578 | orchestrator |
2026-03-18 01:49:07.637585 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-18 01:49:07.637593 | orchestrator | Wednesday 18 March 2026 01:48:00 +0000 (0:00:00.351) 0:04:01.171 *******
2026-03-18 01:49:07.637600 | orchestrator | ok: [testbed-manager]
2026-03-18 01:49:07.637607 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:49:07.637614 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:49:07.637622 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:49:07.637629 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:49:07.637636 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:49:07.637644 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:49:07.637651 | orchestrator |
2026-03-18 01:49:07.637658 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-18 01:49:07.637665 | orchestrator | Wednesday 18 March 2026 01:48:01 +0000 (0:00:00.351) 0:04:01.522 *******
2026-03-18 01:49:07.637672 | orchestrator | ok: [testbed-manager]
2026-03-18 01:49:07.637680 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:49:07.637688 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:49:07.637720 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:49:07.637727 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:49:07.637733 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:49:07.637739 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:49:07.637747 | orchestrator |
2026-03-18 01:49:07.637755 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-18 01:49:07.637763 | orchestrator | Wednesday 18 March 2026 01:48:01 +0000 (0:00:00.330) 0:04:01.852 *******
2026-03-18 01:49:07.637771 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:49:07.637779 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:49:07.637787 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:49:07.637795 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:49:07.637802 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:49:07.637809 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:49:07.637816 | orchestrator | ok: [testbed-manager]
2026-03-18 01:49:07.637823 | orchestrator |
2026-03-18 01:49:07.637831 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-18 01:49:07.637838 | orchestrator | Wednesday 18 March 2026 01:48:06 +0000 (0:00:05.286) 0:04:07.139 *******
2026-03-18 01:49:07.637850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 01:49:07.637860 | orchestrator |
2026-03-18 01:49:07.637868 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-18 01:49:07.637875 | orchestrator | Wednesday 18 March 2026 01:48:07 +0000 (0:00:00.494) 0:04:07.633 *******
2026-03-18 01:49:07.637909 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade) 
2026-03-18 01:49:07.637916 | orchestrator | skipping: [testbed-manager] => (item=apt-daily) 
2026-03-18 01:49:07.637924 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade) 
2026-03-18 01:49:07.637931 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:49:07.637939 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily) 
2026-03-18 01:49:07.637961 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade) 
2026-03-18 01:49:07.637969 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily) 
2026-03-18 01:49:07.637976 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:49:07.637983 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:49:07.637991 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade) 
2026-03-18 01:49:07.637998 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily) 
2026-03-18 01:49:07.638005 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade) 
2026-03-18 01:49:07.638012 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily) 
2026-03-18 01:49:07.638065 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:49:07.638073 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade) 
2026-03-18 01:49:07.638079 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily) 
2026-03-18 01:49:07.638104 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:49:07.638110 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:49:07.638116 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade) 
2026-03-18 01:49:07.638122 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily) 
2026-03-18 01:49:07.638151 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:49:07.638158 | orchestrator |
2026-03-18 01:49:07.638165 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-18 01:49:07.638171 | orchestrator | Wednesday 18 March 2026 01:48:07 +0000 (0:00:00.389) 0:04:08.023 *******
2026-03-18 01:49:07.638179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 01:49:07.638186 | orchestrator |
2026-03-18 01:49:07.638193 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-18 01:49:07.638209 | orchestrator | Wednesday 18 March 2026 01:48:08 +0000 (0:00:00.536) 0:04:08.559 *******
2026-03-18 01:49:07.638216 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service) 
2026-03-18 01:49:07.638223 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service) 
2026-03-18 01:49:07.638229 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:49:07.638235 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:49:07.638241 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service) 
2026-03-18 01:49:07.638248 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service) 
2026-03-18 01:49:07.638254 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:49:07.638261 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:49:07.638268 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service) 
2026-03-18 01:49:07.638275 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service) 
2026-03-18 01:49:07.638282 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:49:07.638289 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:49:07.638295 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service) 
2026-03-18 01:49:07.638301 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:49:07.638308 | orchestrator |
2026-03-18 01:49:07.638314 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-18 01:49:07.638322 | orchestrator | Wednesday 18 March 2026 01:48:08 +0000 (0:00:00.385) 0:04:08.945 *******
2026-03-18 01:49:07.638329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 01:49:07.638336 | orchestrator |
2026-03-18 01:49:07.638343 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-18 01:49:07.638349 | orchestrator | Wednesday 18 March 2026 01:48:09 +0000 (0:00:00.453) 0:04:09.398 *******
2026-03-18 01:49:07.638355 | 
orchestrator | changed: [testbed-node-5] 2026-03-18 01:49:07.638362 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:49:07.638368 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:49:07.638373 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:49:07.638379 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:49:07.638385 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:49:07.638391 | orchestrator | changed: [testbed-manager] 2026-03-18 01:49:07.638397 | orchestrator | 2026-03-18 01:49:07.638404 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-03-18 01:49:07.638411 | orchestrator | Wednesday 18 March 2026 01:48:43 +0000 (0:00:34.468) 0:04:43.867 ******* 2026-03-18 01:49:07.638418 | orchestrator | changed: [testbed-manager] 2026-03-18 01:49:07.638424 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:49:07.638431 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:49:07.638438 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:49:07.638444 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:49:07.638451 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:49:07.638457 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:49:07.638463 | orchestrator | 2026-03-18 01:49:07.638471 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-03-18 01:49:07.638484 | orchestrator | Wednesday 18 March 2026 01:48:51 +0000 (0:00:08.266) 0:04:52.133 ******* 2026-03-18 01:49:07.638491 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:49:07.638497 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:49:07.638503 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:49:07.638509 | orchestrator | changed: [testbed-manager] 2026-03-18 01:49:07.638515 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:49:07.638521 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:49:07.638527 | orchestrator | changed: 
[testbed-node-4] 2026-03-18 01:49:07.638533 | orchestrator | 2026-03-18 01:49:07.638540 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-03-18 01:49:07.638554 | orchestrator | Wednesday 18 March 2026 01:48:59 +0000 (0:00:07.710) 0:04:59.843 ******* 2026-03-18 01:49:07.638560 | orchestrator | ok: [testbed-manager] 2026-03-18 01:49:07.638567 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:49:07.638574 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:49:07.638580 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:49:07.638587 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:49:07.638593 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:49:07.638599 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:49:07.638605 | orchestrator | 2026-03-18 01:49:07.638612 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-03-18 01:49:07.638619 | orchestrator | Wednesday 18 March 2026 01:49:01 +0000 (0:00:01.770) 0:05:01.614 ******* 2026-03-18 01:49:07.638625 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:49:07.638632 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:49:07.638638 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:49:07.638644 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:49:07.638650 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:49:07.638657 | orchestrator | changed: [testbed-manager] 2026-03-18 01:49:07.638663 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:49:07.638669 | orchestrator | 2026-03-18 01:49:07.638686 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-03-18 01:49:19.598364 | orchestrator | Wednesday 18 March 2026 01:49:07 +0000 (0:00:06.192) 0:05:07.806 ******* 2026-03-18 01:49:19.598478 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, 
testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 01:49:19.598496 | orchestrator | 2026-03-18 01:49:19.598509 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-03-18 01:49:19.598520 | orchestrator | Wednesday 18 March 2026 01:49:08 +0000 (0:00:00.484) 0:05:08.291 ******* 2026-03-18 01:49:19.598531 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:49:19.598544 | orchestrator | changed: [testbed-manager] 2026-03-18 01:49:19.598554 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:49:19.598565 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:49:19.598575 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:49:19.598602 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:49:19.598624 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:49:19.598635 | orchestrator | 2026-03-18 01:49:19.598646 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-03-18 01:49:19.598657 | orchestrator | Wednesday 18 March 2026 01:49:09 +0000 (0:00:00.929) 0:05:09.220 ******* 2026-03-18 01:49:19.598668 | orchestrator | ok: [testbed-manager] 2026-03-18 01:49:19.598680 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:49:19.598690 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:49:19.598701 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:49:19.598711 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:49:19.598722 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:49:19.598732 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:49:19.598743 | orchestrator | 2026-03-18 01:49:19.598754 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-18 01:49:19.598765 | orchestrator | Wednesday 18 March 2026 01:49:10 +0000 (0:00:01.781) 0:05:11.002 ******* 2026-03-18 01:49:19.598776 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:49:19.598787 | 
orchestrator | changed: [testbed-node-5] 2026-03-18 01:49:19.598798 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:49:19.598808 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:49:19.598819 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:49:19.598831 | orchestrator | changed: [testbed-manager] 2026-03-18 01:49:19.598842 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:49:19.598852 | orchestrator | 2026-03-18 01:49:19.598864 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-18 01:49:19.598875 | orchestrator | Wednesday 18 March 2026 01:49:11 +0000 (0:00:00.820) 0:05:11.822 ******* 2026-03-18 01:49:19.598940 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:49:19.598963 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:49:19.598985 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:49:19.599004 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:49:19.599019 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:49:19.599032 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:49:19.599044 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:49:19.599056 | orchestrator | 2026-03-18 01:49:19.599068 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-18 01:49:19.599081 | orchestrator | Wednesday 18 March 2026 01:49:11 +0000 (0:00:00.337) 0:05:12.160 ******* 2026-03-18 01:49:19.599093 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:49:19.599106 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:49:19.599118 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:49:19.599130 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:49:19.599142 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:49:19.599154 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:49:19.599166 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:49:19.599177 | 
orchestrator | 2026-03-18 01:49:19.599189 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-18 01:49:19.599201 | orchestrator | Wednesday 18 March 2026 01:49:12 +0000 (0:00:00.432) 0:05:12.592 ******* 2026-03-18 01:49:19.599213 | orchestrator | ok: [testbed-manager] 2026-03-18 01:49:19.599226 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:49:19.599238 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:49:19.599250 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:49:19.599262 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:49:19.599273 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:49:19.599284 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:49:19.599294 | orchestrator | 2026-03-18 01:49:19.599305 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-18 01:49:19.599331 | orchestrator | Wednesday 18 March 2026 01:49:12 +0000 (0:00:00.333) 0:05:12.925 ******* 2026-03-18 01:49:19.599343 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:49:19.599353 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:49:19.599364 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:49:19.599374 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:49:19.599385 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:49:19.599395 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:49:19.599406 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:49:19.599416 | orchestrator | 2026-03-18 01:49:19.599427 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-18 01:49:19.599439 | orchestrator | Wednesday 18 March 2026 01:49:13 +0000 (0:00:00.336) 0:05:13.262 ******* 2026-03-18 01:49:19.599449 | orchestrator | ok: [testbed-manager] 2026-03-18 01:49:19.599460 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:49:19.599471 | orchestrator | ok: [testbed-node-4] 2026-03-18 
01:49:19.599481 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:49:19.599492 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:49:19.599502 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:49:19.599513 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:49:19.599523 | orchestrator | 2026-03-18 01:49:19.599534 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-18 01:49:19.599546 | orchestrator | Wednesday 18 March 2026 01:49:13 +0000 (0:00:00.320) 0:05:13.582 ******* 2026-03-18 01:49:19.599556 | orchestrator | ok: [testbed-manager] =>  2026-03-18 01:49:19.599567 | orchestrator |  docker_version: 5:27.5.1 2026-03-18 01:49:19.599578 | orchestrator | ok: [testbed-node-3] =>  2026-03-18 01:49:19.599588 | orchestrator |  docker_version: 5:27.5.1 2026-03-18 01:49:19.599599 | orchestrator | ok: [testbed-node-4] =>  2026-03-18 01:49:19.599609 | orchestrator |  docker_version: 5:27.5.1 2026-03-18 01:49:19.599620 | orchestrator | ok: [testbed-node-5] =>  2026-03-18 01:49:19.599630 | orchestrator |  docker_version: 5:27.5.1 2026-03-18 01:49:19.599659 | orchestrator | ok: [testbed-node-0] =>  2026-03-18 01:49:19.599679 | orchestrator |  docker_version: 5:27.5.1 2026-03-18 01:49:19.599690 | orchestrator | ok: [testbed-node-1] =>  2026-03-18 01:49:19.599701 | orchestrator |  docker_version: 5:27.5.1 2026-03-18 01:49:19.599711 | orchestrator | ok: [testbed-node-2] =>  2026-03-18 01:49:19.599722 | orchestrator |  docker_version: 5:27.5.1 2026-03-18 01:49:19.599732 | orchestrator | 2026-03-18 01:49:19.599743 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-18 01:49:19.599753 | orchestrator | Wednesday 18 March 2026 01:49:13 +0000 (0:00:00.317) 0:05:13.900 ******* 2026-03-18 01:49:19.599764 | orchestrator | ok: [testbed-manager] =>  2026-03-18 01:49:19.599775 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-18 01:49:19.599785 | orchestrator | ok: 
[testbed-node-3] =>  2026-03-18 01:49:19.599796 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-18 01:49:19.599806 | orchestrator | ok: [testbed-node-4] =>  2026-03-18 01:49:19.599816 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-18 01:49:19.599827 | orchestrator | ok: [testbed-node-5] =>  2026-03-18 01:49:19.599837 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-18 01:49:19.599848 | orchestrator | ok: [testbed-node-0] =>  2026-03-18 01:49:19.599858 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-18 01:49:19.599868 | orchestrator | ok: [testbed-node-1] =>  2026-03-18 01:49:19.599879 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-18 01:49:19.599890 | orchestrator | ok: [testbed-node-2] =>  2026-03-18 01:49:19.599939 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-18 01:49:19.599951 | orchestrator | 2026-03-18 01:49:19.599962 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-18 01:49:19.599972 | orchestrator | Wednesday 18 March 2026 01:49:14 +0000 (0:00:00.347) 0:05:14.248 ******* 2026-03-18 01:49:19.599983 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:49:19.599994 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:49:19.600004 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:49:19.600014 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:49:19.600025 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:49:19.600035 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:49:19.600046 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:49:19.600056 | orchestrator | 2026-03-18 01:49:19.600067 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-18 01:49:19.600078 | orchestrator | Wednesday 18 March 2026 01:49:14 +0000 (0:00:00.296) 0:05:14.545 ******* 2026-03-18 01:49:19.600088 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:49:19.600099 | orchestrator | 
skipping: [testbed-node-3] 2026-03-18 01:49:19.600109 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:49:19.600120 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:49:19.600131 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:49:19.600141 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:49:19.600152 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:49:19.600162 | orchestrator | 2026-03-18 01:49:19.600173 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-18 01:49:19.600184 | orchestrator | Wednesday 18 March 2026 01:49:14 +0000 (0:00:00.293) 0:05:14.839 ******* 2026-03-18 01:49:19.600197 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 01:49:19.600210 | orchestrator | 2026-03-18 01:49:19.600220 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-18 01:49:19.600231 | orchestrator | Wednesday 18 March 2026 01:49:15 +0000 (0:00:00.459) 0:05:15.298 ******* 2026-03-18 01:49:19.600242 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:49:19.600252 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:49:19.600263 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:49:19.600273 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:49:19.600284 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:49:19.600302 | orchestrator | ok: [testbed-manager] 2026-03-18 01:49:19.600313 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:49:19.600323 | orchestrator | 2026-03-18 01:49:19.600334 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-18 01:49:19.600345 | orchestrator | Wednesday 18 March 2026 01:49:16 +0000 (0:00:01.004) 0:05:16.303 ******* 2026-03-18 
01:49:19.600355 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:49:19.600366 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:49:19.600376 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:49:19.600387 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:49:19.600397 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:49:19.600413 | orchestrator | ok: [testbed-manager] 2026-03-18 01:49:19.600424 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:49:19.600435 | orchestrator | 2026-03-18 01:49:19.600446 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-18 01:49:19.600457 | orchestrator | Wednesday 18 March 2026 01:49:19 +0000 (0:00:03.036) 0:05:19.340 ******* 2026-03-18 01:49:19.600468 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-18 01:49:19.600480 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-18 01:49:19.600490 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-18 01:49:19.600501 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-18 01:49:19.600511 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-18 01:49:19.600522 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-18 01:49:19.600532 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:49:19.600543 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-18 01:49:19.600554 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-18 01:49:19.600564 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-18 01:49:19.600575 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:49:19.600586 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-03-18 01:49:19.600596 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-18 01:49:19.600607 | orchestrator | skipping: [testbed-node-5] => 
(item=docker-engine)  2026-03-18 01:49:19.600617 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:49:19.600629 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-18 01:49:19.600657 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-18 01:50:19.912619 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-18 01:50:19.912695 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:50:19.912703 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-03-18 01:50:19.912708 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-18 01:50:19.912712 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-18 01:50:19.912716 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:50:19.912720 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:50:19.912724 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-18 01:50:19.912728 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-18 01:50:19.912732 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-18 01:50:19.912736 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:50:19.912740 | orchestrator | 2026-03-18 01:50:19.912745 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-18 01:50:19.912750 | orchestrator | Wednesday 18 March 2026 01:49:19 +0000 (0:00:00.667) 0:05:20.007 ******* 2026-03-18 01:50:19.912754 | orchestrator | ok: [testbed-manager] 2026-03-18 01:50:19.912758 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:50:19.912762 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:50:19.912765 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:50:19.912770 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:50:19.912774 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:50:19.912792 | orchestrator | changed: [testbed-node-4] 
2026-03-18 01:50:19.912796 | orchestrator | 2026-03-18 01:50:19.912800 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-18 01:50:19.912804 | orchestrator | Wednesday 18 March 2026 01:49:26 +0000 (0:00:06.647) 0:05:26.655 ******* 2026-03-18 01:50:19.912807 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:50:19.912811 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:50:19.912814 | orchestrator | ok: [testbed-manager] 2026-03-18 01:50:19.912818 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:50:19.912822 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:50:19.912825 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:50:19.912829 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:50:19.912832 | orchestrator | 2026-03-18 01:50:19.912836 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-18 01:50:19.912840 | orchestrator | Wednesday 18 March 2026 01:49:27 +0000 (0:00:01.085) 0:05:27.740 ******* 2026-03-18 01:50:19.912844 | orchestrator | ok: [testbed-manager] 2026-03-18 01:50:19.912847 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:50:19.912851 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:50:19.912854 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:50:19.912858 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:50:19.912862 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:50:19.912865 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:50:19.912869 | orchestrator | 2026-03-18 01:50:19.912873 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-18 01:50:19.912876 | orchestrator | Wednesday 18 March 2026 01:49:35 +0000 (0:00:08.304) 0:05:36.044 ******* 2026-03-18 01:50:19.912880 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:50:19.912884 | orchestrator | changed: [testbed-manager] 2026-03-18 01:50:19.912887 | 
orchestrator | changed: [testbed-node-4] 2026-03-18 01:50:19.912891 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:50:19.912894 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:50:19.912898 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:50:19.912902 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:50:19.912905 | orchestrator | 2026-03-18 01:50:19.912909 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-18 01:50:19.912913 | orchestrator | Wednesday 18 March 2026 01:49:39 +0000 (0:00:03.469) 0:05:39.514 ******* 2026-03-18 01:50:19.912916 | orchestrator | ok: [testbed-manager] 2026-03-18 01:50:19.912920 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:50:19.912979 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:50:19.912983 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:50:19.912987 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:50:19.912991 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:50:19.912994 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:50:19.912998 | orchestrator | 2026-03-18 01:50:19.913002 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-18 01:50:19.913005 | orchestrator | Wednesday 18 March 2026 01:49:40 +0000 (0:00:01.331) 0:05:40.845 ******* 2026-03-18 01:50:19.913009 | orchestrator | ok: [testbed-manager] 2026-03-18 01:50:19.913014 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:50:19.913017 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:50:19.913021 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:50:19.913025 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:50:19.913028 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:50:19.913032 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:50:19.913036 | orchestrator | 2026-03-18 01:50:19.913040 | orchestrator | TASK [osism.services.docker : Unlock 
containerd package] *********************** 2026-03-18 01:50:19.913043 | orchestrator | Wednesday 18 March 2026 01:49:42 +0000 (0:00:01.526) 0:05:42.371 ******* 2026-03-18 01:50:19.913047 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:50:19.913051 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:50:19.913054 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:50:19.913058 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:50:19.913066 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:50:19.913070 | orchestrator | changed: [testbed-manager] 2026-03-18 01:50:19.913073 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:50:19.913077 | orchestrator | 2026-03-18 01:50:19.913081 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-18 01:50:19.913084 | orchestrator | Wednesday 18 March 2026 01:49:42 +0000 (0:00:00.685) 0:05:43.056 ******* 2026-03-18 01:50:19.913088 | orchestrator | ok: [testbed-manager] 2026-03-18 01:50:19.913092 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:50:19.913095 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:50:19.913099 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:50:19.913103 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:50:19.913106 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:50:19.913110 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:50:19.913114 | orchestrator | 2026-03-18 01:50:19.913117 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-18 01:50:19.913131 | orchestrator | Wednesday 18 March 2026 01:49:52 +0000 (0:00:09.389) 0:05:52.446 ******* 2026-03-18 01:50:19.913135 | orchestrator | changed: [testbed-manager] 2026-03-18 01:50:19.913139 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:50:19.913142 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:50:19.913146 | orchestrator | changed: [testbed-node-5] 
2026-03-18 01:50:19.913150 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:50:19.913153 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:50:19.913157 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:50:19.913161 | orchestrator | 2026-03-18 01:50:19.913164 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-18 01:50:19.913168 | orchestrator | Wednesday 18 March 2026 01:49:53 +0000 (0:00:00.911) 0:05:53.357 ******* 2026-03-18 01:50:19.913172 | orchestrator | ok: [testbed-manager] 2026-03-18 01:50:19.913175 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:50:19.913179 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:50:19.913183 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:50:19.913186 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:50:19.913190 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:50:19.913193 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:50:19.913197 | orchestrator | 2026-03-18 01:50:19.913201 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-18 01:50:19.913204 | orchestrator | Wednesday 18 March 2026 01:50:01 +0000 (0:00:08.787) 0:06:02.144 ******* 2026-03-18 01:50:19.913208 | orchestrator | ok: [testbed-manager] 2026-03-18 01:50:19.913212 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:50:19.913215 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:50:19.913219 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:50:19.913223 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:50:19.913226 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:50:19.913230 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:50:19.913233 | orchestrator | 2026-03-18 01:50:19.913237 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-18 01:50:19.913241 | orchestrator | Wednesday 18 March 2026 
01:50:13 +0000 (0:00:11.236) 0:06:13.381 *******
2026-03-18 01:50:19.913244 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-18 01:50:19.913249 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-18 01:50:19.913252 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-18 01:50:19.913256 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-18 01:50:19.913259 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-18 01:50:19.913263 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-18 01:50:19.913267 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-18 01:50:19.913270 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-18 01:50:19.913274 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-18 01:50:19.913281 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-18 01:50:19.913285 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-18 01:50:19.913323 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-18 01:50:19.913327 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-18 01:50:19.913331 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-18 01:50:19.913335 | orchestrator |
2026-03-18 01:50:19.913338 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-18 01:50:19.913342 | orchestrator | Wednesday 18 March 2026 01:50:14 +0000 (0:00:01.253) 0:06:14.635 *******
2026-03-18 01:50:19.913346 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:50:19.913349 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:50:19.913353 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:50:19.913357 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:50:19.913360 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:50:19.913364 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:50:19.913367 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:50:19.913371 | orchestrator |
2026-03-18 01:50:19.913375 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-18 01:50:19.913378 | orchestrator | Wednesday 18 March 2026 01:50:15 +0000 (0:00:00.564) 0:06:15.200 *******
2026-03-18 01:50:19.913382 | orchestrator | ok: [testbed-manager]
2026-03-18 01:50:19.913386 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:50:19.913389 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:50:19.913393 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:50:19.913397 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:50:19.913400 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:50:19.913406 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:50:19.913410 | orchestrator |
2026-03-18 01:50:19.913414 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-18 01:50:19.913419 | orchestrator | Wednesday 18 March 2026 01:50:18 +0000 (0:00:03.827) 0:06:19.027 *******
2026-03-18 01:50:19.913422 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:50:19.913426 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:50:19.913429 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:50:19.913433 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:50:19.913437 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:50:19.913440 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:50:19.913444 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:50:19.913447 | orchestrator |
2026-03-18 01:50:19.913452 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-18 01:50:19.913456 | orchestrator | Wednesday 18 March 2026 01:50:19 +0000 (0:00:00.523) 0:06:19.550 *******
2026-03-18 01:50:19.913460 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-03-18 01:50:19.913464 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-03-18 01:50:19.913467 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:50:19.913471 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-03-18 01:50:19.913474 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-03-18 01:50:19.913478 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:50:19.913482 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-03-18 01:50:19.913485 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-18 01:50:19.913489 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:50:19.913495 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-18 01:50:39.886405 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-18 01:50:39.886501 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:50:39.886511 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-03-18 01:50:39.886518 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-03-18 01:50:39.886526 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:50:39.886554 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-03-18 01:50:39.886561 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-03-18 01:50:39.886567 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:50:39.886574 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-03-18 01:50:39.886580 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-03-18 01:50:39.886587 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:50:39.886593 | orchestrator |
2026-03-18 01:50:39.886602 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-03-18 01:50:39.886609 | orchestrator | Wednesday 18 March 2026 01:50:20 +0000 (0:00:00.831) 0:06:20.382 *******
2026-03-18 01:50:39.886616 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:50:39.886622 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:50:39.886629 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:50:39.886635 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:50:39.886642 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:50:39.886648 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:50:39.886654 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:50:39.886661 | orchestrator |
2026-03-18 01:50:39.886668 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-03-18 01:50:39.886675 | orchestrator | Wednesday 18 March 2026 01:50:20 +0000 (0:00:00.563) 0:06:20.945 *******
2026-03-18 01:50:39.886681 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:50:39.886688 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:50:39.886694 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:50:39.886701 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:50:39.886707 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:50:39.886714 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:50:39.886720 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:50:39.886726 | orchestrator |
2026-03-18 01:50:39.886733 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-18 01:50:39.886739 | orchestrator | Wednesday 18 March 2026 01:50:21 +0000 (0:00:00.684) 0:06:21.539 *******
2026-03-18 01:50:39.886746 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:50:39.886752 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:50:39.886758 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:50:39.886765 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:50:39.886771 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:50:39.886778 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:50:39.886784 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:50:39.886791 | orchestrator |
2026-03-18 01:50:39.886797 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-18 01:50:39.886804 | orchestrator | Wednesday 18 March 2026 01:50:22 +0000 (0:00:00.684) 0:06:22.223 *******
2026-03-18 01:50:39.886810 | orchestrator | ok: [testbed-manager]
2026-03-18 01:50:39.886817 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:50:39.886823 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:50:39.886830 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:50:39.886836 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:50:39.886842 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:50:39.886849 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:50:39.886855 | orchestrator |
2026-03-18 01:50:39.886861 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-18 01:50:39.886868 | orchestrator | Wednesday 18 March 2026 01:50:23 +0000 (0:00:01.889) 0:06:24.113 *******
2026-03-18 01:50:39.886876 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 01:50:39.886885 | orchestrator |
2026-03-18 01:50:39.886891 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-18 01:50:39.886898 | orchestrator | Wednesday 18 March 2026 01:50:24 +0000 (0:00:00.948) 0:06:25.061 *******
2026-03-18 01:50:39.886914 | orchestrator | ok: [testbed-manager]
2026-03-18 01:50:39.886921 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:50:39.886927 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:50:39.886951 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:50:39.886958 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:50:39.886965 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:50:39.886972 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:50:39.886979 | orchestrator |
2026-03-18 01:50:39.886987 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-18 01:50:39.886994 | orchestrator | Wednesday 18 March 2026 01:50:25 +0000 (0:00:00.845) 0:06:25.908 *******
2026-03-18 01:50:39.887001 | orchestrator | ok: [testbed-manager]
2026-03-18 01:50:39.887008 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:50:39.887015 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:50:39.887022 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:50:39.887029 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:50:39.887036 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:50:39.887044 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:50:39.887051 | orchestrator |
2026-03-18 01:50:39.887058 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-18 01:50:39.887065 | orchestrator | Wednesday 18 March 2026 01:50:26 +0000 (0:00:00.857) 0:06:26.765 *******
2026-03-18 01:50:39.887072 | orchestrator | ok: [testbed-manager]
2026-03-18 01:50:39.887079 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:50:39.887086 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:50:39.887092 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:50:39.887099 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:50:39.887105 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:50:39.887111 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:50:39.887118 | orchestrator |
2026-03-18 01:50:39.887124 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-18 01:50:39.887143 | orchestrator | Wednesday 18 March 2026 01:50:28 +0000 (0:00:01.611) 0:06:28.376 *******
2026-03-18 01:50:39.887150 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:50:39.887157 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:50:39.887163 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:50:39.887169 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:50:39.887176 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:50:39.887183 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:50:39.887189 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:50:39.887195 | orchestrator |
2026-03-18 01:50:39.887202 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-18 01:50:39.887209 | orchestrator | Wednesday 18 March 2026 01:50:29 +0000 (0:00:01.348) 0:06:29.725 *******
2026-03-18 01:50:39.887215 | orchestrator | ok: [testbed-manager]
2026-03-18 01:50:39.887222 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:50:39.887228 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:50:39.887235 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:50:39.887241 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:50:39.887248 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:50:39.887254 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:50:39.887261 | orchestrator |
2026-03-18 01:50:39.887267 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-18 01:50:39.887274 | orchestrator | Wednesday 18 March 2026 01:50:30 +0000 (0:00:01.336) 0:06:31.061 *******
2026-03-18 01:50:39.887280 | orchestrator | changed: [testbed-manager]
2026-03-18 01:50:39.887286 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:50:39.887293 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:50:39.887299 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:50:39.887306 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:50:39.887312 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:50:39.887319 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:50:39.887325 | orchestrator |
2026-03-18 01:50:39.887338 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-18 01:50:39.887344 | orchestrator | Wednesday 18 March 2026 01:50:32 +0000 (0:00:01.457) 0:06:32.519 *******
2026-03-18 01:50:39.887351 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 01:50:39.887358 | orchestrator |
2026-03-18 01:50:39.887364 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-18 01:50:39.887371 | orchestrator | Wednesday 18 March 2026 01:50:33 +0000 (0:00:01.088) 0:06:33.607 *******
2026-03-18 01:50:39.887378 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:50:39.887384 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:50:39.887390 | orchestrator | ok: [testbed-manager]
2026-03-18 01:50:39.887397 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:50:39.887403 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:50:39.887410 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:50:39.887416 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:50:39.887423 | orchestrator |
2026-03-18 01:50:39.887429 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-18 01:50:39.887436 | orchestrator | Wednesday 18 March 2026 01:50:34 +0000 (0:00:01.319) 0:06:34.927 *******
2026-03-18 01:50:39.887442 | orchestrator | ok: [testbed-manager]
2026-03-18 01:50:39.887449 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:50:39.887455 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:50:39.887462 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:50:39.887468 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:50:39.887475 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:50:39.887481 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:50:39.887487 | orchestrator |
2026-03-18 01:50:39.887494 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-18 01:50:39.887501 | orchestrator | Wednesday 18 March 2026 01:50:35 +0000 (0:00:01.172) 0:06:36.099 *******
2026-03-18 01:50:39.887507 | orchestrator | ok: [testbed-manager]
2026-03-18 01:50:39.887514 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:50:39.887520 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:50:39.887526 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:50:39.887533 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:50:39.887539 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:50:39.887545 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:50:39.887552 | orchestrator |
2026-03-18 01:50:39.887558 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-18 01:50:39.887565 | orchestrator | Wednesday 18 March 2026 01:50:37 +0000 (0:00:01.166) 0:06:37.266 *******
2026-03-18 01:50:39.887571 | orchestrator | ok: [testbed-manager]
2026-03-18 01:50:39.887591 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:50:39.887597 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:50:39.887604 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:50:39.887611 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:50:39.887617 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:50:39.887623 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:50:39.887630 | orchestrator |
2026-03-18 01:50:39.887636 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-18 01:50:39.887643 | orchestrator | Wednesday 18 March 2026 01:50:38 +0000 (0:00:01.412) 0:06:38.679 *******
2026-03-18 01:50:39.887649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 01:50:39.887656 | orchestrator |
2026-03-18 01:50:39.887662 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-18 01:50:39.887669 | orchestrator | Wednesday 18 March 2026 01:50:39 +0000 (0:00:01.066) 0:06:39.745 *******
2026-03-18 01:50:39.887675 | orchestrator |
2026-03-18 01:50:39.887682 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-18 01:50:39.887692 | orchestrator | Wednesday 18 March 2026 01:50:39 +0000 (0:00:00.042) 0:06:39.788 *******
2026-03-18 01:50:39.887699 | orchestrator |
2026-03-18 01:50:39.887705 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-18 01:50:39.887711 | orchestrator | Wednesday 18 March 2026 01:50:39 +0000 (0:00:00.050) 0:06:39.838 *******
2026-03-18 01:50:39.887718 | orchestrator |
2026-03-18 01:50:39.887724 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-18 01:50:39.887734 | orchestrator | Wednesday 18 March 2026 01:50:39 +0000 (0:00:00.043) 0:06:39.882 *******
2026-03-18 01:51:06.018993 | orchestrator |
2026-03-18 01:51:06.019101 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-18 01:51:06.019113 | orchestrator | Wednesday 18 March 2026 01:50:39 +0000 (0:00:00.041) 0:06:39.923 *******
2026-03-18 01:51:06.019120 | orchestrator |
2026-03-18 01:51:06.019126 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-18 01:51:06.019132 | orchestrator | Wednesday 18 March 2026 01:50:39 +0000 (0:00:00.048) 0:06:39.971 *******
2026-03-18 01:51:06.019138 | orchestrator |
2026-03-18 01:51:06.019144 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-18 01:51:06.019150 | orchestrator | Wednesday 18 March 2026 01:50:39 +0000 (0:00:00.041) 0:06:40.013 *******
2026-03-18 01:51:06.019156 | orchestrator |
2026-03-18 01:51:06.019162 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-18 01:51:06.019168 | orchestrator | Wednesday 18 March 2026 01:50:39 +0000 (0:00:00.042) 0:06:40.056 *******
2026-03-18 01:51:06.019174 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:51:06.019181 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:51:06.019187 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:51:06.019192 | orchestrator |
2026-03-18 01:51:06.019198 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-18 01:51:06.019204 | orchestrator | Wednesday 18 March 2026 01:50:41 +0000 (0:00:01.152) 0:06:41.209 *******
2026-03-18 01:51:06.019210 | orchestrator | changed: [testbed-manager]
2026-03-18 01:51:06.019216 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:51:06.019222 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:51:06.019228 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:51:06.019234 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:51:06.019239 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:51:06.019245 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:51:06.019251 | orchestrator |
2026-03-18 01:51:06.019257 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-18 01:51:06.019263 | orchestrator | Wednesday 18 March 2026 01:50:42 +0000 (0:00:01.541) 0:06:42.750 *******
2026-03-18 01:51:06.019268 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:51:06.019274 | orchestrator | changed: [testbed-manager]
2026-03-18 01:51:06.019280 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:51:06.019286 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:51:06.019291 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:51:06.019297 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:51:06.019303 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:51:06.019308 | orchestrator |
2026-03-18 01:51:06.019314 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-18 01:51:06.019320 | orchestrator | Wednesday 18 March 2026 01:50:43 +0000 (0:00:01.211) 0:06:43.961 *******
2026-03-18 01:51:06.019326 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:51:06.019331 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:51:06.019337 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:51:06.019343 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:51:06.019348 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:51:06.019354 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:51:06.019360 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:51:06.019366 | orchestrator |
2026-03-18 01:51:06.019371 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-18 01:51:06.019381 | orchestrator | Wednesday 18 March 2026 01:50:46 +0000 (0:00:02.366) 0:06:46.327 *******
2026-03-18 01:51:06.019418 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:51:06.019433 | orchestrator |
2026-03-18 01:51:06.019441 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-18 01:51:06.019449 | orchestrator | Wednesday 18 March 2026 01:50:46 +0000 (0:00:00.110) 0:06:46.438 *******
2026-03-18 01:51:06.019458 | orchestrator | ok: [testbed-manager]
2026-03-18 01:51:06.019466 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:51:06.019475 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:51:06.019483 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:51:06.019492 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:51:06.019502 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:51:06.019511 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:51:06.019520 | orchestrator |
2026-03-18 01:51:06.019530 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-18 01:51:06.019541 | orchestrator | Wednesday 18 March 2026 01:50:47 +0000 (0:00:01.043) 0:06:47.482 *******
2026-03-18 01:51:06.019552 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:51:06.019577 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:51:06.019587 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:51:06.019594 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:51:06.019601 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:51:06.019608 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:51:06.019614 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:51:06.019621 | orchestrator |
2026-03-18 01:51:06.019628 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-18 01:51:06.019637 | orchestrator | Wednesday 18 March 2026 01:50:47 +0000 (0:00:00.580) 0:06:48.063 *******
2026-03-18 01:51:06.019648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 01:51:06.019661 | orchestrator |
2026-03-18 01:51:06.019671 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-18 01:51:06.019682 | orchestrator | Wednesday 18 March 2026 01:50:49 +0000 (0:00:01.216) 0:06:49.279 *******
2026-03-18 01:51:06.019692 | orchestrator | ok: [testbed-manager]
2026-03-18 01:51:06.019701 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:51:06.019710 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:51:06.019719 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:51:06.019728 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:51:06.019737 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:51:06.019746 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:51:06.019755 | orchestrator |
2026-03-18 01:51:06.019763 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-18 01:51:06.019773 | orchestrator | Wednesday 18 March 2026 01:50:49 +0000 (0:00:00.858) 0:06:50.138 *******
2026-03-18 01:51:06.019782 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-18 01:51:06.019810 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-18 01:51:06.019821 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-18 01:51:06.019831 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-18 01:51:06.019841 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-18 01:51:06.019852 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-18 01:51:06.019862 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-18 01:51:06.019872 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-18 01:51:06.019882 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-18 01:51:06.019891 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-18 01:51:06.019900 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-18 01:51:06.019910 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-18 01:51:06.019931 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-18 01:51:06.019941 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-18 01:51:06.020017 | orchestrator |
2026-03-18 01:51:06.020028 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-18 01:51:06.020037 | orchestrator | Wednesday 18 March 2026 01:50:52 +0000 (0:00:02.349) 0:06:52.488 *******
2026-03-18 01:51:06.020046 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:51:06.020056 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:51:06.020065 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:51:06.020074 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:51:06.020083 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:51:06.020092 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:51:06.020101 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:51:06.020111 | orchestrator |
2026-03-18 01:51:06.020120 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-18 01:51:06.020130 | orchestrator | Wednesday 18 March 2026 01:50:53 +0000 (0:00:00.747) 0:06:53.236 *******
2026-03-18 01:51:06.020142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 01:51:06.020154 | orchestrator |
2026-03-18 01:51:06.020164 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-18 01:51:06.020173 | orchestrator | Wednesday 18 March 2026 01:50:53 +0000 (0:00:00.866) 0:06:54.102 *******
2026-03-18 01:51:06.020183 | orchestrator | ok: [testbed-manager]
2026-03-18 01:51:06.020192 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:51:06.020201 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:51:06.020210 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:51:06.020219 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:51:06.020229 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:51:06.020239 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:51:06.020249 | orchestrator |
2026-03-18 01:51:06.020258 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-18 01:51:06.020268 | orchestrator | Wednesday 18 March 2026 01:50:54 +0000 (0:00:00.845) 0:06:54.948 *******
2026-03-18 01:51:06.020278 | orchestrator | ok: [testbed-manager]
2026-03-18 01:51:06.020287 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:51:06.020296 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:51:06.020305 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:51:06.020315 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:51:06.020324 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:51:06.020334 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:51:06.020343 | orchestrator |
2026-03-18 01:51:06.020353 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-18 01:51:06.020362 | orchestrator | Wednesday 18 March 2026 01:50:55 +0000 (0:00:01.083) 0:06:56.032 *******
2026-03-18 01:51:06.020372 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:51:06.020381 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:51:06.020390 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:51:06.020399 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:51:06.020409 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:51:06.020418 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:51:06.020428 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:51:06.020437 | orchestrator |
2026-03-18 01:51:06.020447 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-18 01:51:06.020456 | orchestrator | Wednesday 18 March 2026 01:50:56 +0000 (0:00:00.573) 0:06:56.605 *******
2026-03-18 01:51:06.020466 | orchestrator | ok: [testbed-manager]
2026-03-18 01:51:06.020476 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:51:06.020485 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:51:06.020495 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:51:06.020504 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:51:06.020524 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:51:06.020535 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:51:06.020545 | orchestrator |
2026-03-18 01:51:06.020555 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-18 01:51:06.020565 | orchestrator | Wednesday 18 March 2026 01:50:57 +0000 (0:00:01.466) 0:06:58.072 *******
2026-03-18 01:51:06.020575 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:51:06.020584 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:51:06.020594 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:51:06.020603 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:51:06.020612 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:51:06.020622 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:51:06.020632 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:51:06.020641 | orchestrator |
2026-03-18 01:51:06.020651 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-18 01:51:06.020661 | orchestrator | Wednesday 18 March 2026 01:50:58 +0000 (0:00:00.582) 0:06:58.655 *******
2026-03-18 01:51:06.020671 | orchestrator | ok: [testbed-manager]
2026-03-18 01:51:06.020680 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:51:06.020690 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:51:06.020697 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:51:06.020702 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:51:06.020708 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:51:06.020722 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:51:38.526811 | orchestrator |
2026-03-18 01:51:38.526902 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-18 01:51:38.526915 | orchestrator | Wednesday 18 March 2026 01:51:05 +0000 (0:00:07.532) 0:07:06.188 *******
2026-03-18 01:51:38.526924 | orchestrator | ok: [testbed-manager]
2026-03-18 01:51:38.526933 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:51:38.526942 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:51:38.526950 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:51:38.526958 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:51:38.527024 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:51:38.527033 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:51:38.527041 | orchestrator |
2026-03-18 01:51:38.527050 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-18 01:51:38.527059 | orchestrator | Wednesday 18 March 2026 01:51:07 +0000 (0:00:01.686) 0:07:07.875 *******
2026-03-18 01:51:38.527067 | orchestrator | ok: [testbed-manager]
2026-03-18 01:51:38.527075 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:51:38.527083 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:51:38.527091 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:51:38.527099 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:51:38.527107 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:51:38.527116 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:51:38.527124 | orchestrator |
2026-03-18 01:51:38.527132 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-18 01:51:38.527151 | orchestrator | Wednesday 18 March 2026 01:51:09 +0000 (0:00:01.677) 0:07:09.552 *******
2026-03-18 01:51:38.527159 | orchestrator | ok: [testbed-manager]
2026-03-18 01:51:38.527167 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:51:38.527175 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:51:38.527183 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:51:38.527191 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:51:38.527199 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:51:38.527207 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:51:38.527215 | orchestrator |
2026-03-18 01:51:38.527223 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-18 01:51:38.527231 | orchestrator | Wednesday 18 March 2026 01:51:11 +0000 (0:00:01.771) 0:07:11.324 *******
2026-03-18 01:51:38.527239 | orchestrator | ok: [testbed-manager]
2026-03-18 01:51:38.527248 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:51:38.527256 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:51:38.527299 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:51:38.527307 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:51:38.527315 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:51:38.527323 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:51:38.527331 | orchestrator |
2026-03-18 01:51:38.527338 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-18 01:51:38.527347 | orchestrator | Wednesday 18 March 2026 01:51:11 +0000 (0:00:00.839) 0:07:12.164 *******
2026-03-18 01:51:38.527356 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:51:38.527365 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:51:38.527374 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:51:38.527383 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:51:38.527392 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:51:38.527401 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:51:38.527410 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:51:38.527419 | orchestrator |
2026-03-18 01:51:38.527428 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-18 01:51:38.527437 | orchestrator | Wednesday 18 March 2026 01:51:13 +0000 (0:00:01.081) 0:07:13.246 *******
2026-03-18 01:51:38.527447 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:51:38.527455 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:51:38.527464 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:51:38.527474 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:51:38.527482 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:51:38.527492 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:51:38.527499 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:51:38.527507 | orchestrator |
2026-03-18 01:51:38.527515 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-18 01:51:38.527523 | orchestrator | Wednesday 18 March 2026 01:51:13 +0000 (0:00:00.532) 0:07:13.779 *******
2026-03-18 01:51:38.527531 | orchestrator | ok: [testbed-manager]
2026-03-18 01:51:38.527554 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:51:38.527563 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:51:38.527570 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:51:38.527578 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:51:38.527586 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:51:38.527598 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:51:38.527606 | orchestrator |
2026-03-18 01:51:38.527613 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-18 01:51:38.527621 | orchestrator | Wednesday 18 March 2026 01:51:14 +0000 (0:00:00.549) 0:07:14.328 *******
2026-03-18 01:51:38.527629 | orchestrator | ok: [testbed-manager]
2026-03-18 01:51:38.527637 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:51:38.527644 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:51:38.527653 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:51:38.527660 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:51:38.527668 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:51:38.527676 | orchestrator | ok: [testbed-node-2]
2026-03-18
01:51:38.527683 | orchestrator | 2026-03-18 01:51:38.527691 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-03-18 01:51:38.527699 | orchestrator | Wednesday 18 March 2026 01:51:14 +0000 (0:00:00.652) 0:07:14.981 ******* 2026-03-18 01:51:38.527707 | orchestrator | ok: [testbed-manager] 2026-03-18 01:51:38.527715 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:51:38.527722 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:51:38.527730 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:51:38.527738 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:51:38.527746 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:51:38.527753 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:51:38.527761 | orchestrator | 2026-03-18 01:51:38.527768 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-03-18 01:51:38.527776 | orchestrator | Wednesday 18 March 2026 01:51:15 +0000 (0:00:00.880) 0:07:15.862 ******* 2026-03-18 01:51:38.527784 | orchestrator | ok: [testbed-manager] 2026-03-18 01:51:38.527792 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:51:38.527806 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:51:38.527814 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:51:38.527822 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:51:38.527829 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:51:38.527837 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:51:38.527844 | orchestrator | 2026-03-18 01:51:38.527867 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-03-18 01:51:38.527875 | orchestrator | Wednesday 18 March 2026 01:51:21 +0000 (0:00:05.760) 0:07:21.622 ******* 2026-03-18 01:51:38.527883 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:51:38.527891 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:51:38.527899 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:51:38.527907 
| orchestrator | skipping: [testbed-node-5] 2026-03-18 01:51:38.527914 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:51:38.527922 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:51:38.527930 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:51:38.527938 | orchestrator | 2026-03-18 01:51:38.527945 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-03-18 01:51:38.527953 | orchestrator | Wednesday 18 March 2026 01:51:22 +0000 (0:00:00.581) 0:07:22.203 ******* 2026-03-18 01:51:38.527982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 01:51:38.527993 | orchestrator | 2026-03-18 01:51:38.528001 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-03-18 01:51:38.528009 | orchestrator | Wednesday 18 March 2026 01:51:23 +0000 (0:00:01.115) 0:07:23.319 ******* 2026-03-18 01:51:38.528017 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:51:38.528025 | orchestrator | ok: [testbed-manager] 2026-03-18 01:51:38.528033 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:51:38.528041 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:51:38.528048 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:51:38.528056 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:51:38.528064 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:51:38.528071 | orchestrator | 2026-03-18 01:51:38.528079 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-03-18 01:51:38.528087 | orchestrator | Wednesday 18 March 2026 01:51:24 +0000 (0:00:01.685) 0:07:25.005 ******* 2026-03-18 01:51:38.528095 | orchestrator | ok: [testbed-manager] 2026-03-18 01:51:38.528103 | orchestrator | ok: [testbed-node-3] 2026-03-18 
01:51:38.528111 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:51:38.528118 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:51:38.528126 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:51:38.528134 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:51:38.528142 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:51:38.528149 | orchestrator | 2026-03-18 01:51:38.528157 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-03-18 01:51:38.528165 | orchestrator | Wednesday 18 March 2026 01:51:25 +0000 (0:00:01.074) 0:07:26.079 ******* 2026-03-18 01:51:38.528173 | orchestrator | ok: [testbed-manager] 2026-03-18 01:51:38.528181 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:51:38.528188 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:51:38.528196 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:51:38.528204 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:51:38.528212 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:51:38.528219 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:51:38.528227 | orchestrator | 2026-03-18 01:51:38.528235 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-03-18 01:51:38.528243 | orchestrator | Wednesday 18 March 2026 01:51:26 +0000 (0:00:00.878) 0:07:26.957 ******* 2026-03-18 01:51:38.528251 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-18 01:51:38.528260 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-18 01:51:38.528273 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-18 01:51:38.528281 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-18 01:51:38.528293 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-18 01:51:38.528301 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-18 01:51:38.528309 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-18 01:51:38.528317 | orchestrator | 2026-03-18 01:51:38.528325 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-03-18 01:51:38.528333 | orchestrator | Wednesday 18 March 2026 01:51:28 +0000 (0:00:01.853) 0:07:28.811 ******* 2026-03-18 01:51:38.528341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 01:51:38.528348 | orchestrator | 2026-03-18 01:51:38.528356 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-03-18 01:51:38.528364 | orchestrator | Wednesday 18 March 2026 01:51:29 +0000 (0:00:00.857) 0:07:29.669 ******* 2026-03-18 01:51:38.528372 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:51:38.528380 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:51:38.528388 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:51:38.528396 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:51:38.528404 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:51:38.528411 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:51:38.528419 | orchestrator | changed: 
[testbed-manager] 2026-03-18 01:51:38.528427 | orchestrator | 2026-03-18 01:51:38.528440 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-18 01:52:10.204602 | orchestrator | Wednesday 18 March 2026 01:51:38 +0000 (0:00:09.030) 0:07:38.699 ******* 2026-03-18 01:52:10.204681 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:52:10.204689 | orchestrator | ok: [testbed-manager] 2026-03-18 01:52:10.204694 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:52:10.204699 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:52:10.204704 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:52:10.204710 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:52:10.204715 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:52:10.204720 | orchestrator | 2026-03-18 01:52:10.204726 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-18 01:52:10.204732 | orchestrator | Wednesday 18 March 2026 01:51:40 +0000 (0:00:02.170) 0:07:40.869 ******* 2026-03-18 01:52:10.204737 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:52:10.204742 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:52:10.204747 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:52:10.204752 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:52:10.204757 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:52:10.204762 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:52:10.204767 | orchestrator | 2026-03-18 01:52:10.204772 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-18 01:52:10.204777 | orchestrator | Wednesday 18 March 2026 01:51:41 +0000 (0:00:01.272) 0:07:42.142 ******* 2026-03-18 01:52:10.204782 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:52:10.204788 | orchestrator | changed: [testbed-manager] 2026-03-18 01:52:10.204793 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:52:10.204798 | orchestrator | changed: 
[testbed-node-5] 2026-03-18 01:52:10.204803 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:52:10.204825 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:52:10.204831 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:52:10.204836 | orchestrator | 2026-03-18 01:52:10.204841 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-18 01:52:10.204846 | orchestrator | 2026-03-18 01:52:10.204851 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-18 01:52:10.204856 | orchestrator | Wednesday 18 March 2026 01:51:43 +0000 (0:00:01.220) 0:07:43.363 ******* 2026-03-18 01:52:10.204861 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:52:10.204866 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:52:10.204871 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:52:10.204876 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:52:10.204881 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:52:10.204886 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:52:10.204891 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:52:10.204896 | orchestrator | 2026-03-18 01:52:10.204901 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-18 01:52:10.204906 | orchestrator | 2026-03-18 01:52:10.204911 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-03-18 01:52:10.204916 | orchestrator | Wednesday 18 March 2026 01:51:43 +0000 (0:00:00.778) 0:07:44.141 ******* 2026-03-18 01:52:10.204921 | orchestrator | changed: [testbed-manager] 2026-03-18 01:52:10.204926 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:52:10.204931 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:52:10.204936 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:52:10.204941 | orchestrator | changed: [testbed-node-0] 2026-03-18 
01:52:10.204946 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:52:10.204951 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:52:10.204956 | orchestrator | 2026-03-18 01:52:10.204961 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-18 01:52:10.204966 | orchestrator | Wednesday 18 March 2026 01:51:45 +0000 (0:00:01.352) 0:07:45.493 ******* 2026-03-18 01:52:10.204971 | orchestrator | ok: [testbed-manager] 2026-03-18 01:52:10.205035 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:52:10.205046 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:52:10.205054 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:52:10.205063 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:52:10.205072 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:52:10.205080 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:52:10.205088 | orchestrator | 2026-03-18 01:52:10.205095 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-18 01:52:10.205103 | orchestrator | Wednesday 18 March 2026 01:51:46 +0000 (0:00:01.482) 0:07:46.976 ******* 2026-03-18 01:52:10.205112 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:52:10.205120 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:52:10.205129 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:52:10.205137 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:52:10.205145 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:52:10.205170 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:52:10.205180 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:52:10.205189 | orchestrator | 2026-03-18 01:52:10.205197 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-18 01:52:10.205203 | orchestrator | Wednesday 18 March 2026 01:51:47 +0000 (0:00:00.521) 0:07:47.498 ******* 2026-03-18 01:52:10.205210 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 01:52:10.205218 | orchestrator | 2026-03-18 01:52:10.205224 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-18 01:52:10.205230 | orchestrator | Wednesday 18 March 2026 01:51:48 +0000 (0:00:01.022) 0:07:48.521 ******* 2026-03-18 01:52:10.205237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 01:52:10.205252 | orchestrator | 2026-03-18 01:52:10.205258 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-18 01:52:10.205263 | orchestrator | Wednesday 18 March 2026 01:51:49 +0000 (0:00:00.833) 0:07:49.354 ******* 2026-03-18 01:52:10.205269 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:52:10.205275 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:52:10.205281 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:52:10.205286 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:52:10.205292 | orchestrator | changed: [testbed-manager] 2026-03-18 01:52:10.205298 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:52:10.205304 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:52:10.205309 | orchestrator | 2026-03-18 01:52:10.205329 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-18 01:52:10.205335 | orchestrator | Wednesday 18 March 2026 01:51:58 +0000 (0:00:09.028) 0:07:58.383 ******* 2026-03-18 01:52:10.205341 | orchestrator | changed: [testbed-manager] 2026-03-18 01:52:10.205346 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:52:10.205352 | orchestrator | changed: [testbed-node-4] 2026-03-18 
01:52:10.205357 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:52:10.205363 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:52:10.205369 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:52:10.205374 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:52:10.205380 | orchestrator | 2026-03-18 01:52:10.205386 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-18 01:52:10.205391 | orchestrator | Wednesday 18 March 2026 01:51:59 +0000 (0:00:01.125) 0:07:59.509 ******* 2026-03-18 01:52:10.205397 | orchestrator | changed: [testbed-manager] 2026-03-18 01:52:10.205403 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:52:10.205408 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:52:10.205414 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:52:10.205420 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:52:10.205425 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:52:10.205431 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:52:10.205436 | orchestrator | 2026-03-18 01:52:10.205442 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-18 01:52:10.205448 | orchestrator | Wednesday 18 March 2026 01:52:00 +0000 (0:00:01.391) 0:08:00.901 ******* 2026-03-18 01:52:10.205454 | orchestrator | changed: [testbed-manager] 2026-03-18 01:52:10.205460 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:52:10.205465 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:52:10.205471 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:52:10.205476 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:52:10.205482 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:52:10.205488 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:52:10.205493 | orchestrator | 2026-03-18 01:52:10.205499 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-03-18 01:52:10.205505 | orchestrator | Wednesday 18 March 2026 01:52:02 +0000 (0:00:01.937) 0:08:02.838 ******* 2026-03-18 01:52:10.205511 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:52:10.205516 | orchestrator | changed: [testbed-manager] 2026-03-18 01:52:10.205522 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:52:10.205528 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:52:10.205534 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:52:10.205540 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:52:10.205546 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:52:10.205552 | orchestrator | 2026-03-18 01:52:10.205558 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-18 01:52:10.205563 | orchestrator | Wednesday 18 March 2026 01:52:03 +0000 (0:00:01.236) 0:08:04.074 ******* 2026-03-18 01:52:10.205568 | orchestrator | changed: [testbed-manager] 2026-03-18 01:52:10.205573 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:52:10.205583 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:52:10.205588 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:52:10.205593 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:52:10.205598 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:52:10.205603 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:52:10.205608 | orchestrator | 2026-03-18 01:52:10.205613 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-18 01:52:10.205618 | orchestrator | 2026-03-18 01:52:10.205623 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-18 01:52:10.205628 | orchestrator | Wednesday 18 March 2026 01:52:05 +0000 (0:00:01.148) 0:08:05.222 ******* 2026-03-18 01:52:10.205633 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-18 01:52:10.205638 | orchestrator | 2026-03-18 01:52:10.205643 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-18 01:52:10.205648 | orchestrator | Wednesday 18 March 2026 01:52:05 +0000 (0:00:00.851) 0:08:06.074 ******* 2026-03-18 01:52:10.205653 | orchestrator | ok: [testbed-manager] 2026-03-18 01:52:10.205658 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:52:10.205663 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:52:10.205668 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:52:10.205673 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:52:10.205678 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:52:10.205687 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:52:10.205692 | orchestrator | 2026-03-18 01:52:10.205697 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-18 01:52:10.205702 | orchestrator | Wednesday 18 March 2026 01:52:06 +0000 (0:00:01.080) 0:08:07.155 ******* 2026-03-18 01:52:10.205707 | orchestrator | changed: [testbed-manager] 2026-03-18 01:52:10.205712 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:52:10.205717 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:52:10.205722 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:52:10.205727 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:52:10.205732 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:52:10.205737 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:52:10.205742 | orchestrator | 2026-03-18 01:52:10.205747 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-18 01:52:10.205752 | orchestrator | Wednesday 18 March 2026 01:52:08 +0000 (0:00:01.159) 0:08:08.315 ******* 2026-03-18 01:52:10.205757 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-18 01:52:10.205762 | orchestrator | 2026-03-18 01:52:10.205767 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-18 01:52:10.205772 | orchestrator | Wednesday 18 March 2026 01:52:09 +0000 (0:00:01.149) 0:08:09.465 ******* 2026-03-18 01:52:10.205777 | orchestrator | ok: [testbed-manager] 2026-03-18 01:52:10.205782 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:52:10.205787 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:52:10.205792 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:52:10.205797 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:52:10.205802 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:52:10.205807 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:52:10.205812 | orchestrator | 2026-03-18 01:52:10.205820 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-18 01:52:11.911533 | orchestrator | Wednesday 18 March 2026 01:52:10 +0000 (0:00:00.913) 0:08:10.378 ******* 2026-03-18 01:52:11.911615 | orchestrator | changed: [testbed-manager] 2026-03-18 01:52:11.911626 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:52:11.911634 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:52:11.911641 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:52:11.911647 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:52:11.911654 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:52:11.911661 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:52:11.911701 | orchestrator | 2026-03-18 01:52:11.911710 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 01:52:11.911718 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-18 01:52:11.911726 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-03-18 01:52:11.911733 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-18 01:52:11.911740 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-18 01:52:11.911747 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-03-18 01:52:11.911754 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-18 01:52:11.911760 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-18 01:52:11.911767 | orchestrator | 2026-03-18 01:52:11.911774 | orchestrator | 2026-03-18 01:52:11.911781 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 01:52:11.911788 | orchestrator | Wednesday 18 March 2026 01:52:11 +0000 (0:00:01.124) 0:08:11.503 ******* 2026-03-18 01:52:11.911795 | orchestrator | =============================================================================== 2026-03-18 01:52:11.911801 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.16s 2026-03-18 01:52:11.911808 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.47s 2026-03-18 01:52:11.911815 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.23s 2026-03-18 01:52:11.911821 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.23s 2026-03-18 01:52:11.911828 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.45s 2026-03-18 01:52:11.911835 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.31s 2026-03-18 01:52:11.911842 | orchestrator | osism.services.docker : Install docker package ------------------------- 
11.24s 2026-03-18 01:52:11.911849 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.39s 2026-03-18 01:52:11.911856 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.03s 2026-03-18 01:52:11.911863 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.03s 2026-03-18 01:52:11.911869 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.79s 2026-03-18 01:52:11.911876 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.35s 2026-03-18 01:52:11.911882 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.30s 2026-03-18 01:52:11.911901 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.27s 2026-03-18 01:52:11.911908 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.71s 2026-03-18 01:52:11.911915 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.53s 2026-03-18 01:52:11.911922 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.65s 2026-03-18 01:52:11.911928 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.19s 2026-03-18 01:52:11.911935 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.04s 2026-03-18 01:52:11.911942 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.76s 2026-03-18 01:52:12.282207 | orchestrator | + osism apply fail2ban 2026-03-18 01:52:25.387771 | orchestrator | 2026-03-18 01:52:25 | INFO  | Task cca84e31-0d19-4d59-8202-87400287708e (fail2ban) was prepared for execution. 
2026-03-18 01:52:25.387912 | orchestrator | 2026-03-18 01:52:25 | INFO  | It takes a moment until task cca84e31-0d19-4d59-8202-87400287708e (fail2ban) has been started and output is visible here. 2026-03-18 01:52:47.917561 | orchestrator | 2026-03-18 01:52:47.917698 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-18 01:52:47.917713 | orchestrator | 2026-03-18 01:52:47.917723 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-18 01:52:47.917732 | orchestrator | Wednesday 18 March 2026 01:52:30 +0000 (0:00:00.321) 0:00:00.321 ******* 2026-03-18 01:52:47.917744 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 01:52:47.917757 | orchestrator | 2026-03-18 01:52:47.917766 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-18 01:52:47.917775 | orchestrator | Wednesday 18 March 2026 01:52:31 +0000 (0:00:01.232) 0:00:01.553 ******* 2026-03-18 01:52:47.917785 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:52:47.917795 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:52:47.917804 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:52:47.917813 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:52:47.917822 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:52:47.917831 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:52:47.917839 | orchestrator | changed: [testbed-manager] 2026-03-18 01:52:47.917849 | orchestrator | 2026-03-18 01:52:47.917858 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-18 01:52:47.917867 | orchestrator | Wednesday 18 March 2026 01:52:42 +0000 (0:00:11.171) 0:00:12.725 ******* 
2026-03-18 01:52:47.917876 | orchestrator | changed: [testbed-manager]
2026-03-18 01:52:47.917885 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:52:47.917893 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:52:47.917902 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:52:47.917911 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:52:47.917919 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:52:47.917928 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:52:47.917937 | orchestrator |
2026-03-18 01:52:47.917946 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-18 01:52:47.917954 | orchestrator | Wednesday 18 March 2026 01:52:44 +0000 (0:00:01.498) 0:00:14.223 *******
2026-03-18 01:52:47.917963 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:52:47.917973 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:52:47.917984 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:52:47.918122 | orchestrator | ok: [testbed-manager]
2026-03-18 01:52:47.918152 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:52:47.918174 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:52:47.918189 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:52:47.918211 | orchestrator |
2026-03-18 01:52:47.918228 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-18 01:52:47.918243 | orchestrator | Wednesday 18 March 2026 01:52:45 +0000 (0:00:01.507) 0:00:15.731 *******
2026-03-18 01:52:47.918257 | orchestrator | changed: [testbed-manager]
2026-03-18 01:52:47.918272 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:52:47.918286 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:52:47.918301 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:52:47.918315 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:52:47.918329 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:52:47.918343 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:52:47.918358 | orchestrator |
2026-03-18 01:52:47.918372 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 01:52:47.918388 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:52:47.918445 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:52:47.918462 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:52:47.918477 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:52:47.918491 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:52:47.918507 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:52:47.918522 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:52:47.918536 | orchestrator |
2026-03-18 01:52:47.918549 | orchestrator |
2026-03-18 01:52:47.918563 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 01:52:47.918578 | orchestrator | Wednesday 18 March 2026 01:52:47 +0000 (0:00:01.643) 0:00:17.374 *******
2026-03-18 01:52:47.918594 | orchestrator | ===============================================================================
2026-03-18 01:52:47.918609 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.17s
2026-03-18 01:52:47.918624 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.64s
2026-03-18 01:52:47.918639 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.51s
2026-03-18 01:52:47.918651 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.50s
2026-03-18 01:52:47.918660 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.23s
2026-03-18 01:52:48.276291 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-18 01:52:48.276394 | orchestrator | + osism apply network
2026-03-18 01:53:00.380648 | orchestrator | 2026-03-18 01:53:00 | INFO  | Task c2c2926d-696e-4cea-8155-2469ed800458 (network) was prepared for execution.
2026-03-18 01:53:00.380751 | orchestrator | 2026-03-18 01:53:00 | INFO  | It takes a moment until task c2c2926d-696e-4cea-8155-2469ed800458 (network) has been started and output is visible here.
2026-03-18 01:53:30.337270 | orchestrator |
2026-03-18 01:53:30.337367 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-18 01:53:30.337377 | orchestrator |
2026-03-18 01:53:30.337383 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-18 01:53:30.337394 | orchestrator | Wednesday 18 March 2026 01:53:04 +0000 (0:00:00.304) 0:00:00.304 *******
2026-03-18 01:53:30.337404 | orchestrator | ok: [testbed-manager]
2026-03-18 01:53:30.337411 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:53:30.337417 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:53:30.337423 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:53:30.337429 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:53:30.337434 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:53:30.337440 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:53:30.337447 | orchestrator |
2026-03-18 01:53:30.337456 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-18 01:53:30.337467 | orchestrator | Wednesday 18 March 2026 01:53:05 +0000 (0:00:00.796) 0:00:01.101 *******
2026-03-18 01:53:30.337475 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 01:53:30.337482 | orchestrator |
2026-03-18 01:53:30.337488 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-18 01:53:30.337513 | orchestrator | Wednesday 18 March 2026 01:53:07 +0000 (0:00:01.329) 0:00:02.431 *******
2026-03-18 01:53:30.337524 | orchestrator | ok: [testbed-manager]
2026-03-18 01:53:30.337530 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:53:30.337536 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:53:30.337541 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:53:30.337546 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:53:30.337552 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:53:30.337557 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:53:30.337562 | orchestrator |
2026-03-18 01:53:30.337569 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-18 01:53:30.337579 | orchestrator | Wednesday 18 March 2026 01:53:09 +0000 (0:00:02.168) 0:00:04.599 *******
2026-03-18 01:53:30.337589 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:53:30.337594 | orchestrator | ok: [testbed-manager]
2026-03-18 01:53:30.337600 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:53:30.337606 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:53:30.337611 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:53:30.337616 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:53:30.337621 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:53:30.337628 | orchestrator |
2026-03-18 01:53:30.337637 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-18 01:53:30.337647 | orchestrator | Wednesday 18 March 2026 01:53:11 +0000 (0:00:01.828) 0:00:06.428 *******
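The netplan tasks in this play render a template locally and copy it to each host; later in the log, the cleanup task keeps the role-managed `/etc/netplan/01-osism.yaml` and removes cloud-init's `/etc/netplan/50-cloud-init.yaml`. For orientation, a minimal sketch of a netplan file of the kind involved here; the interface name, addresses, and gateway are assumptions for illustration, not the role's actual rendered output:

```yaml
# /etc/netplan/01-osism.yaml -- illustrative sketch; the interface name and
# addressing below are assumptions, not the role's rendered output.
network:
  version: 2
  ethernets:
    ens3:
      dhcp4: false
      addresses:
        - 192.168.16.10/20
      routes:
        - to: default
          via: 192.168.16.1
```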
2026-03-18 01:53:30.337653 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-18 01:53:30.337659 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-18 01:53:30.337665 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-18 01:53:30.337670 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-18 01:53:30.337675 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-18 01:53:30.337680 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-18 01:53:30.337686 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-18 01:53:30.337695 | orchestrator |
2026-03-18 01:53:30.337720 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-18 01:53:30.337726 | orchestrator | Wednesday 18 March 2026 01:53:12 +0000 (0:00:01.048) 0:00:07.476 *******
2026-03-18 01:53:30.337731 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-18 01:53:30.337738 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-18 01:53:30.337743 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-18 01:53:30.337751 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-18 01:53:30.337760 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-18 01:53:30.337770 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-18 01:53:30.337776 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-18 01:53:30.337781 | orchestrator |
2026-03-18 01:53:30.337787 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-18 01:53:30.337792 | orchestrator | Wednesday 18 March 2026 01:53:15 +0000 (0:00:03.691) 0:00:11.168 *******
2026-03-18 01:53:30.337798 | orchestrator | changed: [testbed-manager]
2026-03-18 01:53:30.337803 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:53:30.337809 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:53:30.337817 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:53:30.337830 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:53:30.337837 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:53:30.337843 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:53:30.337849 | orchestrator |
2026-03-18 01:53:30.337856 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-18 01:53:30.337862 | orchestrator | Wednesday 18 March 2026 01:53:17 +0000 (0:00:01.667) 0:00:12.835 *******
2026-03-18 01:53:30.337868 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-18 01:53:30.337875 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-18 01:53:30.337885 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-18 01:53:30.337895 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-18 01:53:30.337907 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-18 01:53:30.337913 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-18 01:53:30.337919 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-18 01:53:30.337924 | orchestrator |
2026-03-18 01:53:30.337931 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-03-18 01:53:30.337941 | orchestrator | Wednesday 18 March 2026 01:53:19 +0000 (0:00:01.809) 0:00:14.644 *******
2026-03-18 01:53:30.337950 | orchestrator | ok: [testbed-manager]
2026-03-18 01:53:30.337957 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:53:30.337964 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:53:30.337970 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:53:30.337976 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:53:30.337983 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:53:30.337989 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:53:30.337996 | orchestrator |
2026-03-18 01:53:30.338006 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-03-18 01:53:30.338098 | orchestrator | Wednesday 18 March 2026 01:53:20 +0000 (0:00:01.143) 0:00:15.787 *******
2026-03-18 01:53:30.338110 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:53:30.338116 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:53:30.338122 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:53:30.338127 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:53:30.338133 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:53:30.338138 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:53:30.338143 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:53:30.338150 | orchestrator |
2026-03-18 01:53:30.338159 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-03-18 01:53:30.338169 | orchestrator | Wednesday 18 March 2026 01:53:21 +0000 (0:00:00.712) 0:00:16.500 *******
2026-03-18 01:53:30.338175 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:53:30.338180 | orchestrator | ok: [testbed-manager]
2026-03-18 01:53:30.338186 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:53:30.338191 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:53:30.338196 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:53:30.338201 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:53:30.338206 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:53:30.338214 | orchestrator |
2026-03-18 01:53:30.338223 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-03-18 01:53:30.338233 | orchestrator | Wednesday 18 March 2026 01:53:23 +0000 (0:00:01.979) 0:00:18.480 *******
2026-03-18 01:53:30.338257 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:53:30.338270 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:53:30.338280 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:53:30.338297 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:53:30.338303 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:53:30.338308 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:53:30.338320 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-03-18 01:53:30.338328 | orchestrator |
2026-03-18 01:53:30.338336 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-03-18 01:53:30.338353 | orchestrator | Wednesday 18 March 2026 01:53:24 +0000 (0:00:01.007) 0:00:19.488 *******
2026-03-18 01:53:30.338359 | orchestrator | ok: [testbed-manager]
2026-03-18 01:53:30.338371 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:53:30.338377 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:53:30.338382 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:53:30.338388 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:53:30.338395 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:53:30.338404 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:53:30.338414 | orchestrator |
2026-03-18 01:53:30.338420 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-03-18 01:53:30.338425 | orchestrator | Wednesday 18 March 2026 01:53:25 +0000 (0:00:01.635) 0:00:21.123 *******
2026-03-18 01:53:30.338431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 01:53:30.338443 | orchestrator |
2026-03-18 01:53:30.338449 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-18 01:53:30.338456 | orchestrator | Wednesday 18 March 2026 01:53:27 +0000 (0:00:01.357) 0:00:22.480 *******
2026-03-18 01:53:30.338465 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:53:30.338475 | orchestrator | ok: [testbed-manager]
2026-03-18 01:53:30.338481 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:53:30.338486 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:53:30.338491 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:53:30.338497 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:53:30.338502 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:53:30.338507 | orchestrator |
2026-03-18 01:53:30.338512 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-03-18 01:53:30.338521 | orchestrator | Wednesday 18 March 2026 01:53:28 +0000 (0:00:01.160) 0:00:23.641 *******
2026-03-18 01:53:30.338530 | orchestrator | ok: [testbed-manager]
2026-03-18 01:53:30.338538 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:53:30.338544 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:53:30.338549 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:53:30.338554 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:53:30.338559 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:53:30.338564 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:53:30.338570 | orchestrator |
2026-03-18 01:53:30.338576 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-18 01:53:30.338585 | orchestrator | Wednesday 18 March 2026 01:53:28 +0000 (0:00:00.708) 0:00:24.350 *******
2026-03-18 01:53:30.338600 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-03-18 01:53:30.338606 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-03-18 01:53:30.338611 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-03-18 01:53:30.338616 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-03-18 01:53:30.338622 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-18 01:53:30.338627 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-18 01:53:30.338632 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-03-18 01:53:30.338639 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-03-18 01:53:30.338648 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-18 01:53:30.338658 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-18 01:53:30.338664 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-18 01:53:30.338669 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-03-18 01:53:30.338675 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-18 01:53:30.338680 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-18 01:53:30.338685 | orchestrator |
2026-03-18 01:53:30.338695 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-03-18 01:53:48.021615 | orchestrator | Wednesday 18 March 2026 01:53:30 +0000 (0:00:01.340) 0:00:25.691 *******
2026-03-18 01:53:48.021724 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:53:48.021741 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:53:48.021753 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:53:48.021764 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:53:48.021775 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:53:48.021786 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:53:48.021796 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:53:48.021810 | orchestrator |
2026-03-18 01:53:48.021860 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-03-18 01:53:48.021879 | orchestrator | Wednesday 18 March 2026 01:53:31 +0000 (0:00:00.693) 0:00:26.385 *******
2026-03-18 01:53:48.021900 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4
2026-03-18 01:53:48.021920 | orchestrator |
2026-03-18 01:53:48.021938 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-03-18 01:53:48.021956 | orchestrator | Wednesday 18 March 2026 01:53:35 +0000 (0:00:04.895) 0:00:31.281 *******
2026-03-18 01:53:48.021976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-18 01:53:48.021995 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-18 01:53:48.022115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-18 01:53:48.022134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-18 01:53:48.022145 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-18 01:53:48.022160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-18 01:53:48.022173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-18 01:53:48.022201 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-18 01:53:48.022215 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-18 01:53:48.022234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-18 01:53:48.022247 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-18 01:53:48.022281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-18 01:53:48.022308 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-18 01:53:48.022322 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-18 01:53:48.022334 | orchestrator |
2026-03-18 01:53:48.022349 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-03-18 01:53:48.022362 | orchestrator | Wednesday 18 March 2026 01:53:42 +0000 (0:00:06.167) 0:00:37.449 *******
2026-03-18 01:53:48.022373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-18 01:53:48.022388 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-18 01:53:48.022407 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-18 01:53:48.022425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-18 01:53:48.022444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-18 01:53:48.022464 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-18 01:53:48.022484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-18 01:53:48.022502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-18 01:53:48.022525 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-18 01:53:48.022537 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-18 01:53:48.022548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-18 01:53:48.022567 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-18 01:53:48.022592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-18 01:53:54.939230 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-18 01:53:54.939348 | orchestrator |
2026-03-18 01:53:54.939362 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-03-18 01:53:54.939372 | orchestrator | Wednesday 18 March 2026 01:53:47 +0000 (0:00:05.919) 0:00:43.368 *******
2026-03-18 01:53:54.939383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 01:53:54.939391 | orchestrator |
2026-03-18 01:53:54.939400 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
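The two tasks above render one `.netdev` and one `.network` file per VXLAN on each host (the names `30-vxlan0.netdev`, `30-vxlan1.network`, etc. appear in the cleanup task that follows). Using the parameters logged for testbed-node-0's vxlan1 (VNI 23, local VTEP 192.168.16.10, MTU 1350, address 192.168.128.10/20), a sketch of what such a systemd-networkd pair can look like; the exact keys and layout the role emits are an assumption, and the per-peer `dests` list, which systemd-networkd would express as static FDB entries on the VXLAN interface, is omitted here:

```ini
; /etc/systemd/network/30-vxlan1.netdev -- illustrative sketch built from the
; logged parameters for testbed-node-0; not the role's actual template.
[NetDev]
Name=vxlan1
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=23
Local=192.168.16.10
```

```ini
; /etc/systemd/network/30-vxlan1.network -- assigns the logged overlay address.
[Match]
Name=vxlan1

[Network]
Address=192.168.128.10/20
```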
2026-03-18 01:53:54.939408 | orchestrator | Wednesday 18 March 2026 01:53:49 +0000 (0:00:01.479) 0:00:44.848 *******
2026-03-18 01:53:54.939416 | orchestrator | ok: [testbed-manager]
2026-03-18 01:53:54.939425 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:53:54.939433 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:53:54.939441 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:53:54.939449 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:53:54.939456 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:53:54.939464 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:53:54.939472 | orchestrator |
2026-03-18 01:53:54.939480 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-18 01:53:54.939488 | orchestrator | Wednesday 18 March 2026 01:53:50 +0000 (0:00:01.242) 0:00:46.090 *******
2026-03-18 01:53:54.939496 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-18 01:53:54.939505 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-18 01:53:54.939513 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-18 01:53:54.939521 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-18 01:53:54.939529 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-18 01:53:54.939537 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-18 01:53:54.939545 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-18 01:53:54.939553 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-18 01:53:54.939561 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:53:54.939569 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-18 01:53:54.939577 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-18 01:53:54.939585 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-18 01:53:54.939593 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-18 01:53:54.939601 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:53:54.939629 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-18 01:53:54.939638 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-18 01:53:54.939646 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-18 01:53:54.939653 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:53:54.939661 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-18 01:53:54.939684 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-18 01:53:54.939692 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-18 01:53:54.939700 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-18 01:53:54.939708 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-18 01:53:54.939716 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:53:54.939723 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-18 01:53:54.939731 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-18 01:53:54.939739 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-18 01:53:54.939747 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-18 01:53:54.939755 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:53:54.939762 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:53:54.939770 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-18 01:53:54.939780 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-18 01:53:54.939789 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-18 01:53:54.939798 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-18 01:53:54.939807 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:53:54.939815 | orchestrator |
2026-03-18 01:53:54.939824 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-18 01:53:54.939849 | orchestrator | Wednesday 18 March 2026 01:53:52 +0000 (0:00:02.268) 0:00:48.358 *******
2026-03-18 01:53:54.939858 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:53:54.939867 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:53:54.939876 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:53:54.939884 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:53:54.939893 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:53:54.939902 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:53:54.939911 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:53:54.939920 | orchestrator |
2026-03-18 01:53:54.939929 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-18 01:53:54.939938 | orchestrator | Wednesday 18 March 2026 01:53:53 +0000 (0:00:00.669) 0:00:49.028 *******
2026-03-18 01:53:54.939947 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:53:54.939956 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:53:54.939964 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:53:54.939974 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:53:54.939983 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:53:54.939991 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:53:54.940001 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:53:54.940009 | orchestrator |
2026-03-18 01:53:54.940018 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 01:53:54.940051 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-18 01:53:54.940062 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-18 01:53:54.940078 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-18 01:53:54.940087 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-18 01:53:54.940095 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-18 01:53:54.940102 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-18 01:53:54.940110 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-18 01:53:54.940118 | orchestrator |
2026-03-18 01:53:54.940126 | orchestrator |
2026-03-18 01:53:54.940134 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 01:53:54.940142 | orchestrator | Wednesday 18 March 2026 01:53:54 +0000 (0:00:00.817) 0:00:49.845 *******
2026-03-18 01:53:54.940150 | orchestrator | ===============================================================================
2026-03-18 01:53:54.940158 | orchestrator | osism.commons.network : Create systemd networkd netdev
files ------------ 6.17s 2026-03-18 01:53:54.940166 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.92s 2026-03-18 01:53:54.940174 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.90s 2026-03-18 01:53:54.940181 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.69s 2026-03-18 01:53:54.940189 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.27s 2026-03-18 01:53:54.940197 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.17s 2026-03-18 01:53:54.940205 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.98s 2026-03-18 01:53:54.940217 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.83s 2026-03-18 01:53:54.940225 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.81s 2026-03-18 01:53:54.940233 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.67s 2026-03-18 01:53:54.940241 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.64s 2026-03-18 01:53:54.940249 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.48s 2026-03-18 01:53:54.940257 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.36s 2026-03-18 01:53:54.940265 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.34s 2026-03-18 01:53:54.940273 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.33s 2026-03-18 01:53:54.940281 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.24s 2026-03-18 01:53:54.940288 | orchestrator | osism.commons.network : List existing configuration files 
--------------- 1.16s 2026-03-18 01:53:54.940296 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.14s 2026-03-18 01:53:54.940304 | orchestrator | osism.commons.network : Create required directories --------------------- 1.05s 2026-03-18 01:53:54.940312 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.01s 2026-03-18 01:53:55.321149 | orchestrator | + osism apply wireguard 2026-03-18 01:54:07.499109 | orchestrator | 2026-03-18 01:54:07 | INFO  | Task d8aa8360-9a9e-4e4f-88a3-54b21092c457 (wireguard) was prepared for execution. 2026-03-18 01:54:07.499230 | orchestrator | 2026-03-18 01:54:07 | INFO  | It takes a moment until task d8aa8360-9a9e-4e4f-88a3-54b21092c457 (wireguard) has been started and output is visible here. 2026-03-18 01:54:29.299161 | orchestrator | 2026-03-18 01:54:29.299292 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-03-18 01:54:29.299357 | orchestrator | 2026-03-18 01:54:29.299377 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-03-18 01:54:29.299392 | orchestrator | Wednesday 18 March 2026 01:54:12 +0000 (0:00:00.233) 0:00:00.233 ******* 2026-03-18 01:54:29.299408 | orchestrator | ok: [testbed-manager] 2026-03-18 01:54:29.299426 | orchestrator | 2026-03-18 01:54:29.299444 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-03-18 01:54:29.299458 | orchestrator | Wednesday 18 March 2026 01:54:13 +0000 (0:00:01.648) 0:00:01.881 ******* 2026-03-18 01:54:29.299472 | orchestrator | changed: [testbed-manager] 2026-03-18 01:54:29.299494 | orchestrator | 2026-03-18 01:54:29.299511 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-03-18 01:54:29.299527 | orchestrator | Wednesday 18 March 2026 01:54:20 +0000 (0:00:07.299) 0:00:09.181 ******* 2026-03-18 
01:54:29.299544 | orchestrator | changed: [testbed-manager] 2026-03-18 01:54:29.299561 | orchestrator | 2026-03-18 01:54:29.299576 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-03-18 01:54:29.299590 | orchestrator | Wednesday 18 March 2026 01:54:21 +0000 (0:00:00.613) 0:00:09.794 ******* 2026-03-18 01:54:29.299606 | orchestrator | changed: [testbed-manager] 2026-03-18 01:54:29.299622 | orchestrator | 2026-03-18 01:54:29.299638 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-03-18 01:54:29.299653 | orchestrator | Wednesday 18 March 2026 01:54:22 +0000 (0:00:00.454) 0:00:10.249 ******* 2026-03-18 01:54:29.299670 | orchestrator | ok: [testbed-manager] 2026-03-18 01:54:29.299685 | orchestrator | 2026-03-18 01:54:29.299701 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-03-18 01:54:29.299716 | orchestrator | Wednesday 18 March 2026 01:54:22 +0000 (0:00:00.720) 0:00:10.969 ******* 2026-03-18 01:54:29.299732 | orchestrator | ok: [testbed-manager] 2026-03-18 01:54:29.299747 | orchestrator | 2026-03-18 01:54:29.299762 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-03-18 01:54:29.299776 | orchestrator | Wednesday 18 March 2026 01:54:23 +0000 (0:00:00.419) 0:00:11.389 ******* 2026-03-18 01:54:29.299792 | orchestrator | ok: [testbed-manager] 2026-03-18 01:54:29.299808 | orchestrator | 2026-03-18 01:54:29.299824 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-03-18 01:54:29.299840 | orchestrator | Wednesday 18 March 2026 01:54:23 +0000 (0:00:00.452) 0:00:11.842 ******* 2026-03-18 01:54:29.299856 | orchestrator | changed: [testbed-manager] 2026-03-18 01:54:29.299871 | orchestrator | 2026-03-18 01:54:29.299884 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 
2026-03-18 01:54:29.299900 | orchestrator | Wednesday 18 March 2026 01:54:24 +0000 (0:00:01.269) 0:00:13.112 ******* 2026-03-18 01:54:29.299938 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-18 01:54:29.299958 | orchestrator | changed: [testbed-manager] 2026-03-18 01:54:29.299974 | orchestrator | 2026-03-18 01:54:29.300005 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-03-18 01:54:29.300021 | orchestrator | Wednesday 18 March 2026 01:54:25 +0000 (0:00:01.054) 0:00:14.166 ******* 2026-03-18 01:54:29.300036 | orchestrator | changed: [testbed-manager] 2026-03-18 01:54:29.300131 | orchestrator | 2026-03-18 01:54:29.300148 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-03-18 01:54:29.300164 | orchestrator | Wednesday 18 March 2026 01:54:27 +0000 (0:00:01.872) 0:00:16.039 ******* 2026-03-18 01:54:29.300181 | orchestrator | changed: [testbed-manager] 2026-03-18 01:54:29.300196 | orchestrator | 2026-03-18 01:54:29.300211 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 01:54:29.300228 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 01:54:29.300247 | orchestrator | 2026-03-18 01:54:29.300263 | orchestrator | 2026-03-18 01:54:29.300280 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 01:54:29.300316 | orchestrator | Wednesday 18 March 2026 01:54:28 +0000 (0:00:00.984) 0:00:17.024 ******* 2026-03-18 01:54:29.300333 | orchestrator | =============================================================================== 2026-03-18 01:54:29.300349 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.30s 2026-03-18 01:54:29.300366 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.87s 
2026-03-18 01:54:29.300382 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.65s 2026-03-18 01:54:29.300397 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.27s 2026-03-18 01:54:29.300413 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.05s 2026-03-18 01:54:29.300426 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.99s 2026-03-18 01:54:29.300439 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.72s 2026-03-18 01:54:29.300452 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.61s 2026-03-18 01:54:29.300464 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s 2026-03-18 01:54:29.300477 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s 2026-03-18 01:54:29.300489 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.42s 2026-03-18 01:54:29.648727 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-03-18 01:54:29.692763 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-18 01:54:29.692852 | orchestrator | Dload Upload Total Spent Left Speed 2026-03-18 01:54:29.767878 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 185 0 --:--:-- --:--:-- --:--:-- 186 2026-03-18 01:54:29.784163 | orchestrator | + osism apply --environment custom workarounds 2026-03-18 01:54:31.841495 | orchestrator | 2026-03-18 01:54:31 | INFO  | Trying to run play workarounds in environment custom 2026-03-18 01:54:41.976261 | orchestrator | 2026-03-18 01:54:41 | INFO  | Task ff5e3de4-623b-4119-a5f5-1928627e35c6 (workarounds) was prepared for execution. 
2026-03-18 01:54:41.976399 | orchestrator | 2026-03-18 01:54:41 | INFO  | It takes a moment until task ff5e3de4-623b-4119-a5f5-1928627e35c6 (workarounds) has been started and output is visible here. 2026-03-18 01:55:08.507676 | orchestrator | 2026-03-18 01:55:08.507787 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 01:55:08.507802 | orchestrator | 2026-03-18 01:55:08.507813 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-03-18 01:55:08.507824 | orchestrator | Wednesday 18 March 2026 01:54:46 +0000 (0:00:00.128) 0:00:00.128 ******* 2026-03-18 01:55:08.507834 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-03-18 01:55:08.507845 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-03-18 01:55:08.507854 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-03-18 01:55:08.507864 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-03-18 01:55:08.507874 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-03-18 01:55:08.507883 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-03-18 01:55:08.507893 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-03-18 01:55:08.507903 | orchestrator | 2026-03-18 01:55:08.507912 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-03-18 01:55:08.507922 | orchestrator | 2026-03-18 01:55:08.507932 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-03-18 01:55:08.507941 | orchestrator | Wednesday 18 March 2026 01:54:47 +0000 (0:00:00.888) 0:00:01.017 ******* 2026-03-18 01:55:08.507951 | orchestrator | ok: [testbed-manager] 2026-03-18 01:55:08.507980 | orchestrator | 2026-03-18 01:55:08.507991 | 
orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-03-18 01:55:08.508000 | orchestrator | 2026-03-18 01:55:08.508010 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-03-18 01:55:08.508038 | orchestrator | Wednesday 18 March 2026 01:54:50 +0000 (0:00:02.814) 0:00:03.831 ******* 2026-03-18 01:55:08.508048 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:55:08.508084 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:55:08.508094 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:55:08.508103 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:55:08.508113 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:55:08.508122 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:55:08.508132 | orchestrator | 2026-03-18 01:55:08.508142 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-03-18 01:55:08.508151 | orchestrator | 2026-03-18 01:55:08.508161 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-03-18 01:55:08.508170 | orchestrator | Wednesday 18 March 2026 01:54:52 +0000 (0:00:01.989) 0:00:05.821 ******* 2026-03-18 01:55:08.508181 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-18 01:55:08.508192 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-18 01:55:08.508202 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-18 01:55:08.508211 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-18 01:55:08.508221 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-18 01:55:08.508238 | orchestrator 
| changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-18 01:55:08.508249 | orchestrator | 2026-03-18 01:55:08.508258 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2026-03-18 01:55:08.508268 | orchestrator | Wednesday 18 March 2026 01:54:53 +0000 (0:00:01.559) 0:00:07.380 ******* 2026-03-18 01:55:08.508278 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:55:08.508288 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:55:08.508298 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:55:08.508307 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:55:08.508317 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:55:08.508326 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:55:08.508336 | orchestrator | 2026-03-18 01:55:08.508345 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-03-18 01:55:08.508358 | orchestrator | Wednesday 18 March 2026 01:54:57 +0000 (0:00:03.712) 0:00:11.093 ******* 2026-03-18 01:55:08.508374 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:55:08.508391 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:55:08.508406 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:55:08.508422 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:55:08.508438 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:55:08.508453 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:55:08.508468 | orchestrator | 2026-03-18 01:55:08.508481 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-03-18 01:55:08.508495 | orchestrator | 2026-03-18 01:55:08.508510 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-03-18 01:55:08.508524 | orchestrator | Wednesday 18 March 2026 01:54:58 +0000 (0:00:00.763) 0:00:11.856 ******* 2026-03-18 
01:55:08.508538 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:55:08.508553 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:55:08.508568 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:55:08.508584 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:55:08.508599 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:55:08.508614 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:55:08.508643 | orchestrator | changed: [testbed-manager] 2026-03-18 01:55:08.508660 | orchestrator | 2026-03-18 01:55:08.508676 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-03-18 01:55:08.508693 | orchestrator | Wednesday 18 March 2026 01:54:59 +0000 (0:00:01.606) 0:00:13.462 ******* 2026-03-18 01:55:08.508709 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:55:08.508725 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:55:08.508741 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:55:08.508759 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:55:08.508777 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:55:08.508787 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:55:08.508816 | orchestrator | changed: [testbed-manager] 2026-03-18 01:55:08.508826 | orchestrator | 2026-03-18 01:55:08.508836 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-03-18 01:55:08.508846 | orchestrator | Wednesday 18 March 2026 01:55:01 +0000 (0:00:01.670) 0:00:15.133 ******* 2026-03-18 01:55:08.508856 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:55:08.508866 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:55:08.508875 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:55:08.508885 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:55:08.508894 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:55:08.508904 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:55:08.508913 | orchestrator | ok: [testbed-manager] 
2026-03-18 01:55:08.508923 | orchestrator | 2026-03-18 01:55:08.508933 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-03-18 01:55:08.508942 | orchestrator | Wednesday 18 March 2026 01:55:02 +0000 (0:00:01.655) 0:00:16.788 ******* 2026-03-18 01:55:08.508952 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:55:08.508961 | orchestrator | changed: [testbed-node-1] 2026-03-18 01:55:08.508971 | orchestrator | changed: [testbed-node-2] 2026-03-18 01:55:08.508980 | orchestrator | changed: [testbed-node-3] 2026-03-18 01:55:08.508990 | orchestrator | changed: [testbed-node-4] 2026-03-18 01:55:08.508999 | orchestrator | changed: [testbed-node-5] 2026-03-18 01:55:08.509009 | orchestrator | changed: [testbed-manager] 2026-03-18 01:55:08.509018 | orchestrator | 2026-03-18 01:55:08.509028 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-03-18 01:55:08.509037 | orchestrator | Wednesday 18 March 2026 01:55:04 +0000 (0:00:01.963) 0:00:18.752 ******* 2026-03-18 01:55:08.509047 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:55:08.509057 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:55:08.509092 | orchestrator | skipping: [testbed-node-2] 2026-03-18 01:55:08.509102 | orchestrator | skipping: [testbed-node-3] 2026-03-18 01:55:08.509112 | orchestrator | skipping: [testbed-node-4] 2026-03-18 01:55:08.509121 | orchestrator | skipping: [testbed-node-5] 2026-03-18 01:55:08.509131 | orchestrator | skipping: [testbed-manager] 2026-03-18 01:55:08.509140 | orchestrator | 2026-03-18 01:55:08.509150 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-03-18 01:55:08.509160 | orchestrator | 2026-03-18 01:55:08.509169 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-03-18 01:55:08.509179 | orchestrator | Wednesday 18 March 2026 01:55:05 +0000 (0:00:00.656) 
0:00:19.408 ******* 2026-03-18 01:55:08.509188 | orchestrator | ok: [testbed-node-2] 2026-03-18 01:55:08.509198 | orchestrator | ok: [testbed-node-3] 2026-03-18 01:55:08.509207 | orchestrator | ok: [testbed-node-1] 2026-03-18 01:55:08.509217 | orchestrator | ok: [testbed-node-0] 2026-03-18 01:55:08.509226 | orchestrator | ok: [testbed-node-5] 2026-03-18 01:55:08.509236 | orchestrator | ok: [testbed-manager] 2026-03-18 01:55:08.509245 | orchestrator | ok: [testbed-node-4] 2026-03-18 01:55:08.509255 | orchestrator | 2026-03-18 01:55:08.509264 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 01:55:08.509276 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-18 01:55:08.509287 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 01:55:08.509305 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 01:55:08.509321 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 01:55:08.509332 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 01:55:08.509341 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 01:55:08.509351 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 01:55:08.509360 | orchestrator | 2026-03-18 01:55:08.509370 | orchestrator | 2026-03-18 01:55:08.509380 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 01:55:08.509390 | orchestrator | Wednesday 18 March 2026 01:55:08 +0000 (0:00:02.880) 0:00:22.288 ******* 2026-03-18 01:55:08.509399 | orchestrator | 
=============================================================================== 2026-03-18 01:55:08.509409 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.71s 2026-03-18 01:55:08.509419 | orchestrator | Install python3-docker -------------------------------------------------- 2.88s 2026-03-18 01:55:08.509429 | orchestrator | Apply netplan configuration --------------------------------------------- 2.81s 2026-03-18 01:55:08.509438 | orchestrator | Apply netplan configuration --------------------------------------------- 1.99s 2026-03-18 01:55:08.509448 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.96s 2026-03-18 01:55:08.509457 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.67s 2026-03-18 01:55:08.509467 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.66s 2026-03-18 01:55:08.509476 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.61s 2026-03-18 01:55:08.509486 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.56s 2026-03-18 01:55:08.509495 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.89s 2026-03-18 01:55:08.509505 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.76s 2026-03-18 01:55:08.509522 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.66s 2026-03-18 01:55:09.264834 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-03-18 01:55:21.549585 | orchestrator | 2026-03-18 01:55:21 | INFO  | Task a3de1a2f-f683-49bd-8fb3-1de3dfc922aa (reboot) was prepared for execution. 
2026-03-18 01:55:21.549715 | orchestrator | 2026-03-18 01:55:21 | INFO  | It takes a moment until task a3de1a2f-f683-49bd-8fb3-1de3dfc922aa (reboot) has been started and output is visible here. 2026-03-18 01:55:32.265803 | orchestrator | 2026-03-18 01:55:32.265892 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-18 01:55:32.265899 | orchestrator | 2026-03-18 01:55:32.265904 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-18 01:55:32.265909 | orchestrator | Wednesday 18 March 2026 01:55:26 +0000 (0:00:00.222) 0:00:00.222 ******* 2026-03-18 01:55:32.265913 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:55:32.265918 | orchestrator | 2026-03-18 01:55:32.265922 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-18 01:55:32.265926 | orchestrator | Wednesday 18 March 2026 01:55:26 +0000 (0:00:00.114) 0:00:00.336 ******* 2026-03-18 01:55:32.265930 | orchestrator | changed: [testbed-node-0] 2026-03-18 01:55:32.265934 | orchestrator | 2026-03-18 01:55:32.265938 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-18 01:55:32.265954 | orchestrator | Wednesday 18 March 2026 01:55:27 +0000 (0:00:00.939) 0:00:01.275 ******* 2026-03-18 01:55:32.265958 | orchestrator | skipping: [testbed-node-0] 2026-03-18 01:55:32.265961 | orchestrator | 2026-03-18 01:55:32.265965 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-18 01:55:32.265970 | orchestrator | 2026-03-18 01:55:32.266002 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-18 01:55:32.266007 | orchestrator | Wednesday 18 March 2026 01:55:27 +0000 (0:00:00.129) 0:00:01.405 ******* 2026-03-18 01:55:32.266011 | orchestrator | skipping: [testbed-node-1] 2026-03-18 01:55:32.266049 | 
orchestrator |
2026-03-18 01:55:32.266054 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-18 01:55:32.266058 | orchestrator | Wednesday 18 March 2026 01:55:27 +0000 (0:00:00.117) 0:00:01.522 *******
2026-03-18 01:55:32.266062 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:55:32.266066 | orchestrator |
2026-03-18 01:55:32.266070 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-18 01:55:32.266101 | orchestrator | Wednesday 18 March 2026 01:55:28 +0000 (0:00:00.720) 0:00:02.243 *******
2026-03-18 01:55:32.266105 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:55:32.266109 | orchestrator |
2026-03-18 01:55:32.266113 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-18 01:55:32.266117 | orchestrator |
2026-03-18 01:55:32.266121 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-18 01:55:32.266125 | orchestrator | Wednesday 18 March 2026 01:55:28 +0000 (0:00:00.120) 0:00:02.363 *******
2026-03-18 01:55:32.266129 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:55:32.266133 | orchestrator |
2026-03-18 01:55:32.266137 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-18 01:55:32.266141 | orchestrator | Wednesday 18 March 2026 01:55:28 +0000 (0:00:00.235) 0:00:02.598 *******
2026-03-18 01:55:32.266145 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:55:32.266150 | orchestrator |
2026-03-18 01:55:32.266159 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-18 01:55:32.266163 | orchestrator | Wednesday 18 March 2026 01:55:29 +0000 (0:00:00.676) 0:00:03.274 *******
2026-03-18 01:55:32.266167 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:55:32.266171 | orchestrator |
2026-03-18 01:55:32.266175 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-18 01:55:32.266179 | orchestrator |
2026-03-18 01:55:32.266183 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-18 01:55:32.266187 | orchestrator | Wednesday 18 March 2026 01:55:29 +0000 (0:00:00.129) 0:00:03.404 *******
2026-03-18 01:55:32.266190 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:55:32.266194 | orchestrator |
2026-03-18 01:55:32.266198 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-18 01:55:32.266202 | orchestrator | Wednesday 18 March 2026 01:55:29 +0000 (0:00:00.102) 0:00:03.506 *******
2026-03-18 01:55:32.266206 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:55:32.266210 | orchestrator |
2026-03-18 01:55:32.266214 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-18 01:55:32.266217 | orchestrator | Wednesday 18 March 2026 01:55:30 +0000 (0:00:00.679) 0:00:04.186 *******
2026-03-18 01:55:32.266221 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:55:32.266225 | orchestrator |
2026-03-18 01:55:32.266229 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-18 01:55:32.266233 | orchestrator |
2026-03-18 01:55:32.266237 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-18 01:55:32.266241 | orchestrator | Wednesday 18 March 2026 01:55:30 +0000 (0:00:00.118) 0:00:04.304 *******
2026-03-18 01:55:32.266245 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:55:32.266249 | orchestrator |
2026-03-18 01:55:32.266252 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-18 01:55:32.266261 | orchestrator | Wednesday 18 March 2026 01:55:30 +0000 (0:00:00.118) 0:00:04.423 *******
2026-03-18 01:55:32.266265 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:55:32.266269 | orchestrator |
2026-03-18 01:55:32.266272 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-18 01:55:32.266276 | orchestrator | Wednesday 18 March 2026 01:55:30 +0000 (0:00:00.656) 0:00:05.079 *******
2026-03-18 01:55:32.266280 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:55:32.266285 | orchestrator |
2026-03-18 01:55:32.266289 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-18 01:55:32.266293 | orchestrator |
2026-03-18 01:55:32.266296 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-18 01:55:32.266300 | orchestrator | Wednesday 18 March 2026 01:55:31 +0000 (0:00:00.118) 0:00:05.198 *******
2026-03-18 01:55:32.266304 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:55:32.266308 | orchestrator |
2026-03-18 01:55:32.266312 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-18 01:55:32.266316 | orchestrator | Wednesday 18 March 2026 01:55:31 +0000 (0:00:00.100) 0:00:05.299 *******
2026-03-18 01:55:32.266320 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:55:32.266324 | orchestrator |
2026-03-18 01:55:32.266327 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-18 01:55:32.266331 | orchestrator | Wednesday 18 March 2026 01:55:31 +0000 (0:00:00.639) 0:00:05.938 *******
2026-03-18 01:55:32.266347 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:55:32.266351 | orchestrator |
2026-03-18 01:55:32.266355 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 01:55:32.266360 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 01:55:32.266365 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 01:55:32.266369 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 01:55:32.266373 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 01:55:32.266376 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 01:55:32.266380 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 01:55:32.266384 | orchestrator |
2026-03-18 01:55:32.266388 | orchestrator |
2026-03-18 01:55:32.266393 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 01:55:32.266397 | orchestrator | Wednesday 18 March 2026 01:55:31 +0000 (0:00:00.047) 0:00:05.985 *******
2026-03-18 01:55:32.266401 | orchestrator | ===============================================================================
2026-03-18 01:55:32.266405 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.31s
2026-03-18 01:55:32.266409 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.79s
2026-03-18 01:55:32.266414 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.66s
2026-03-18 01:55:32.633972 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-03-18 01:55:44.835647 | orchestrator | 2026-03-18 01:55:44 | INFO  | Task c91eae45-7149-46f5-a3ac-55764bb60f4f (wait-for-connection) was prepared for execution.
2026-03-18 01:55:44.835794 | orchestrator | 2026-03-18 01:55:44 | INFO  | It takes a moment until task c91eae45-7149-46f5-a3ac-55764bb60f4f (wait-for-connection) has been started and output is visible here.
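The play output above reflects the common Ansible two-step reboot pattern: a guard task that aborts unless the caller confirms (`-e ireallymeanit=yes`), a fire-and-forget reboot, and a skipped in-play wait, with reachability checked afterwards by a separate `wait-for-connection` run. A minimal sketch of such a play, assuming only the task names from the log (the actual OSISM playbook may differ in detail):

```yaml
# Hypothetical sketch of the reboot pattern seen in the log above.
# Task names come from the log; variable names and timeouts are assumptions.
- name: Reboot systems
  hosts: testbed-nodes
  serial: 1                          # the log shows one node per play
  tasks:
    - name: Exit playbook, if user did not mean to reboot systems
      ansible.builtin.fail:
        msg: "To really reboot, rerun with -e ireallymeanit=yes"
      when: ireallymeanit | default('no') != 'yes'

    - name: Reboot system - do not wait for the reboot to complete
      ansible.builtin.shell: sleep 2 && shutdown -r now "Ansible reboot"
      async: 1                       # detach so the dropped connection is not an error
      poll: 0

    - name: Reboot system - wait for the reboot to complete
      ansible.builtin.wait_for_connection:
        delay: 30
        timeout: 600
      when: reboot_wait | default(false) | bool   # skipped in this run
```

Deferring the wait to a dedicated `wait-for-connection` playbook lets all nodes reboot in parallel before a single pass verifies reachability, which is exactly what the 11.54s "Wait until remote system is reachable" task below does.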
2026-03-18 01:56:01.269760 | orchestrator |
2026-03-18 01:56:01.269868 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-03-18 01:56:01.269884 | orchestrator |
2026-03-18 01:56:01.269896 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-03-18 01:56:01.269908 | orchestrator | Wednesday 18 March 2026 01:55:49 +0000 (0:00:00.243) 0:00:00.243 *******
2026-03-18 01:56:01.269920 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:56:01.269932 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:56:01.269943 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:56:01.269954 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:56:01.269983 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:56:01.269995 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:56:01.270073 | orchestrator |
2026-03-18 01:56:01.270135 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 01:56:01.270148 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:56:01.270161 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:56:01.270172 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:56:01.270183 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:56:01.270194 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:56:01.270205 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:56:01.270216 | orchestrator |
2026-03-18 01:56:01.270227 | orchestrator |
2026-03-18 01:56:01.270239 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 01:56:01.270250 | orchestrator | Wednesday 18 March 2026 01:56:00 +0000 (0:00:11.538) 0:00:11.782 *******
2026-03-18 01:56:01.270260 | orchestrator | ===============================================================================
2026-03-18 01:56:01.270271 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.54s
2026-03-18 01:56:01.635565 | orchestrator | + osism apply hddtemp
2026-03-18 01:56:13.863692 | orchestrator | 2026-03-18 01:56:13 | INFO  | Task 63b3a267-d56f-4199-b67d-30575bb06408 (hddtemp) was prepared for execution.
2026-03-18 01:56:13.863805 | orchestrator | 2026-03-18 01:56:13 | INFO  | It takes a moment until task 63b3a267-d56f-4199-b67d-30575bb06408 (hddtemp) has been started and output is visible here.
2026-03-18 01:56:42.368435 | orchestrator |
2026-03-18 01:56:42.368515 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-03-18 01:56:42.368523 | orchestrator |
2026-03-18 01:56:42.368530 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-03-18 01:56:42.368536 | orchestrator | Wednesday 18 March 2026 01:56:18 +0000 (0:00:00.304) 0:00:00.304 *******
2026-03-18 01:56:42.368541 | orchestrator | ok: [testbed-manager]
2026-03-18 01:56:42.368548 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:56:42.368554 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:56:42.368560 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:56:42.368565 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:56:42.368571 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:56:42.368576 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:56:42.368581 | orchestrator |
2026-03-18 01:56:42.368587 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-03-18 01:56:42.368592 | orchestrator | Wednesday 18 March 2026 01:56:19 +0000 (0:00:00.801) 0:00:01.105 *******
2026-03-18 01:56:42.368600 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 01:56:42.368625 | orchestrator |
2026-03-18 01:56:42.368631 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-03-18 01:56:42.368636 | orchestrator | Wednesday 18 March 2026 01:56:20 +0000 (0:00:01.258) 0:00:02.364 *******
2026-03-18 01:56:42.368641 | orchestrator | ok: [testbed-manager]
2026-03-18 01:56:42.368647 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:56:42.368652 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:56:42.368657 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:56:42.368663 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:56:42.368669 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:56:42.368674 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:56:42.368679 | orchestrator |
2026-03-18 01:56:42.368685 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-03-18 01:56:42.368690 | orchestrator | Wednesday 18 March 2026 01:56:22 +0000 (0:00:01.932) 0:00:04.297 *******
2026-03-18 01:56:42.368699 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:56:42.368709 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:56:42.368723 | orchestrator | changed: [testbed-manager]
2026-03-18 01:56:42.368733 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:56:42.368741 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:56:42.368749 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:56:42.368758 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:56:42.368766 | orchestrator |
2026-03-18 01:56:42.368775 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-03-18 01:56:42.368784 | orchestrator | Wednesday 18 March 2026 01:56:23 +0000 (0:00:01.211) 0:00:05.509 *******
2026-03-18 01:56:42.368792 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:56:42.368802 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:56:42.368810 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:56:42.368820 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:56:42.368826 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:56:42.368843 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:56:42.368848 | orchestrator | ok: [testbed-manager]
2026-03-18 01:56:42.368854 | orchestrator |
2026-03-18 01:56:42.368859 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-03-18 01:56:42.368865 | orchestrator | Wednesday 18 March 2026 01:56:24 +0000 (0:00:01.208) 0:00:06.717 *******
2026-03-18 01:56:42.368870 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:56:42.368875 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:56:42.368880 | orchestrator | changed: [testbed-manager]
2026-03-18 01:56:42.368886 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:56:42.368891 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:56:42.368896 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:56:42.368901 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:56:42.368907 | orchestrator |
2026-03-18 01:56:42.368912 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-03-18 01:56:42.368917 | orchestrator | Wednesday 18 March 2026 01:56:25 +0000 (0:00:00.931) 0:00:07.649 *******
2026-03-18 01:56:42.368923 | orchestrator | changed: [testbed-manager]
2026-03-18 01:56:42.368928 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:56:42.368933 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:56:42.368938 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:56:42.368944 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:56:42.368949 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:56:42.368954 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:56:42.368959 | orchestrator |
2026-03-18 01:56:42.368965 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-03-18 01:56:42.368970 | orchestrator | Wednesday 18 March 2026 01:56:38 +0000 (0:00:12.234) 0:00:19.884 *******
2026-03-18 01:56:42.368976 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 01:56:42.368987 | orchestrator |
2026-03-18 01:56:42.368993 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-03-18 01:56:42.368998 | orchestrator | Wednesday 18 March 2026 01:56:39 +0000 (0:00:01.550) 0:00:21.435 *******
2026-03-18 01:56:42.369004 | orchestrator | changed: [testbed-node-0]
2026-03-18 01:56:42.369011 | orchestrator | changed: [testbed-node-2]
2026-03-18 01:56:42.369017 | orchestrator | changed: [testbed-node-1]
2026-03-18 01:56:42.369023 | orchestrator | changed: [testbed-node-3]
2026-03-18 01:56:42.369029 | orchestrator | changed: [testbed-manager]
2026-03-18 01:56:42.369035 | orchestrator | changed: [testbed-node-4]
2026-03-18 01:56:42.369041 | orchestrator | changed: [testbed-node-5]
2026-03-18 01:56:42.369047 | orchestrator |
2026-03-18 01:56:42.369053 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 01:56:42.369060 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 01:56:42.369082 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 01:56:42.369089 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 01:56:42.369119 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 01:56:42.369125 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 01:56:42.369131 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 01:56:42.369138 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 01:56:42.369144 | orchestrator |
2026-03-18 01:56:42.369150 | orchestrator |
2026-03-18 01:56:42.369156 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 01:56:42.369162 | orchestrator | Wednesday 18 March 2026 01:56:41 +0000 (0:00:02.299) 0:00:23.735 *******
2026-03-18 01:56:42.369169 | orchestrator | ===============================================================================
2026-03-18 01:56:42.369175 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.23s
2026-03-18 01:56:42.369180 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.30s
2026-03-18 01:56:42.369186 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.93s
2026-03-18 01:56:42.369191 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.55s
2026-03-18 01:56:42.369196 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.26s
2026-03-18 01:56:42.369202 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.21s
2026-03-18 01:56:42.369207 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.21s
2026-03-18 01:56:42.369212 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.93s
2026-03-18 01:56:42.369217 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.80s
2026-03-18 01:56:42.719828 | orchestrator | ++ semver 9.5.0 7.1.1
2026-03-18 01:56:42.774829 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-18 01:56:42.774914 | orchestrator | + sudo systemctl restart manager.service
2026-03-18 01:56:56.646517 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-18 01:56:56.646610 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-18 01:56:56.646638 | orchestrator | + local max_attempts=60
2026-03-18 01:56:56.646647 | orchestrator | + local name=ceph-ansible
2026-03-18 01:56:56.646656 | orchestrator | + local attempt_num=1
2026-03-18 01:56:56.646663 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-18 01:56:56.682725 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-18 01:56:56.683321 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-18 01:56:56.683346 | orchestrator | + sleep 5
2026-03-18 01:57:01.687957 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-18 01:57:01.725691 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-18 01:57:01.725776 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-18 01:57:01.725787 | orchestrator | + sleep 5
2026-03-18 01:57:06.726451 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-18 01:57:06.766328 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-18 01:57:06.766438 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-18 01:57:06.766455 | orchestrator | + sleep 5
2026-03-18 01:57:11.770290 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-18 01:57:11.810734 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-18 01:57:11.810834 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-18 01:57:11.810850 | orchestrator | + sleep 5
2026-03-18 01:57:16.814260 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-18 01:57:16.855340 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-18 01:57:16.855434 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-18 01:57:16.855449 | orchestrator | + sleep 5
2026-03-18 01:57:21.860629 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-18 01:57:21.897414 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-18 01:57:21.897544 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-18 01:57:21.897613 | orchestrator | + sleep 5
2026-03-18 01:57:26.902150 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-18 01:57:26.947743 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-03-18 01:57:26.947862 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-18 01:57:26.947881 | orchestrator | + sleep 5
2026-03-18 01:57:31.955096 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-18 01:57:32.004923 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-18 01:57:32.005013 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-18 01:57:32.005023 | orchestrator | + sleep 5
2026-03-18 01:57:37.008662 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-18 01:57:37.054742 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-18 01:57:37.054822 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-18 01:57:37.054837 | orchestrator | + sleep 5
2026-03-18 01:57:42.059401 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-18 01:57:42.104585 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-18 01:57:42.104735 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-18 01:57:42.104764 | orchestrator | + sleep 5
2026-03-18 01:57:47.108911 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-18 01:57:47.143779 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-18 01:57:47.143887 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-18 01:57:47.143903 | orchestrator | + sleep 5
2026-03-18 01:57:52.149644 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-18 01:57:52.186163 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-18 01:57:52.186247 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-18 01:57:52.186259 | orchestrator | + sleep 5
2026-03-18 01:57:57.190588 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-18 01:57:57.237046 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-03-18 01:57:57.237190 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-03-18 01:57:57.237206 | orchestrator | + sleep 5
2026-03-18 01:58:02.241340 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-18 01:58:02.279030 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-18 01:58:02.279111 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-18 01:58:02.279143 | orchestrator | + local max_attempts=60
2026-03-18 01:58:02.279153 | orchestrator | + local name=kolla-ansible
2026-03-18 01:58:02.279161 | orchestrator | + local attempt_num=1
2026-03-18 01:58:02.280177 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-18 01:58:02.321626 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-18 01:58:02.321716 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-18 01:58:02.321755 | orchestrator | + local max_attempts=60
2026-03-18 01:58:02.321854 | orchestrator | + local name=osism-ansible
2026-03-18 01:58:02.321870 | orchestrator | + local attempt_num=1
2026-03-18 01:58:02.321891 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-18 01:58:02.354839 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-18 01:58:02.354943 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-18 01:58:02.354957 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-18 01:58:02.526443 | orchestrator | ARA in ceph-ansible already disabled.
2026-03-18 01:58:02.713614 | orchestrator | ARA in kolla-ansible already disabled.
2026-03-18 01:58:02.903034 | orchestrator | ARA in osism-ansible already disabled.
2026-03-18 01:58:03.073656 | orchestrator | ARA in osism-kubernetes already disabled.
2026-03-18 01:58:03.074512 | orchestrator | + osism apply gather-facts
2026-03-18 01:58:15.402629 | orchestrator | 2026-03-18 01:58:15 | INFO  | Task 549e9b4e-5a82-4f1b-8dac-71b75baeb615 (gather-facts) was prepared for execution.
2026-03-18 01:58:15.402710 | orchestrator | 2026-03-18 01:58:15 | INFO  | It takes a moment until task 549e9b4e-5a82-4f1b-8dac-71b75baeb615 (gather-facts) has been started and output is visible here.
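The `set -x` trace above shows `wait_for_container_healthy` polling `docker inspect` for the container's health status every five seconds until it reports `healthy`. A minimal Bash reconstruction of what that trace implies; the `DOCKER_CMD` and `SLEEP_SECS` overrides are illustrative additions (the original hardcodes `/usr/bin/docker` and `sleep 5`), and the exact body of the OSISM helper may differ:

```shell
#!/usr/bin/env bash
# Sketch of wait_for_container_healthy as suggested by the trace above.
# DOCKER_CMD and SLEEP_SECS are assumptions added for testability; the
# traced original uses /usr/bin/docker and a fixed 5-second sleep.
DOCKER_CMD=${DOCKER_CMD:-/usr/bin/docker}
SLEEP_SECS=${SLEEP_SECS:-5}

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Probe the container's health status until Docker reports "healthy".
    while [[ "$($DOCKER_CMD inspect -f '{{.State.Health.Status}}' "$name")" != healthy ]]; do
        # Give up once the attempt counter reaches the limit.
        if (( attempt_num++ == max_attempts )); then
            echo "$name did not become healthy after $max_attempts attempts" >&2
            return 1
        fi
        sleep "$SLEEP_SECS"
    done
}
```

In the log the ceph-ansible container passes through `unhealthy` and `starting` before reaching `healthy` on roughly the fourteenth probe, well inside the 60-attempt budget, while kolla-ansible and osism-ansible are healthy on the first probe.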
2026-03-18 01:58:29.172540 | orchestrator |
2026-03-18 01:58:29.172661 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-18 01:58:29.172680 | orchestrator |
2026-03-18 01:58:29.172692 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-18 01:58:29.172705 | orchestrator | Wednesday 18 March 2026 01:58:19 +0000 (0:00:00.226) 0:00:00.226 *******
2026-03-18 01:58:29.172717 | orchestrator | ok: [testbed-node-0]
2026-03-18 01:58:29.172732 | orchestrator | ok: [testbed-node-2]
2026-03-18 01:58:29.172749 | orchestrator | ok: [testbed-node-1]
2026-03-18 01:58:29.172767 | orchestrator | ok: [testbed-manager]
2026-03-18 01:58:29.172785 | orchestrator | ok: [testbed-node-3]
2026-03-18 01:58:29.172803 | orchestrator | ok: [testbed-node-4]
2026-03-18 01:58:29.172821 | orchestrator | ok: [testbed-node-5]
2026-03-18 01:58:29.172835 | orchestrator |
2026-03-18 01:58:29.172846 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-18 01:58:29.172857 | orchestrator |
2026-03-18 01:58:29.172868 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-18 01:58:29.172879 | orchestrator | Wednesday 18 March 2026 01:58:28 +0000 (0:00:08.096) 0:00:08.322 *******
2026-03-18 01:58:29.172890 | orchestrator | skipping: [testbed-manager]
2026-03-18 01:58:29.172902 | orchestrator | skipping: [testbed-node-0]
2026-03-18 01:58:29.172913 | orchestrator | skipping: [testbed-node-1]
2026-03-18 01:58:29.172924 | orchestrator | skipping: [testbed-node-2]
2026-03-18 01:58:29.172935 | orchestrator | skipping: [testbed-node-3]
2026-03-18 01:58:29.172945 | orchestrator | skipping: [testbed-node-4]
2026-03-18 01:58:29.172956 | orchestrator | skipping: [testbed-node-5]
2026-03-18 01:58:29.172967 | orchestrator |
2026-03-18 01:58:29.172978 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 01:58:29.172990 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 01:58:29.173002 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 01:58:29.173013 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 01:58:29.173024 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 01:58:29.173121 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 01:58:29.173162 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 01:58:29.173218 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 01:58:29.173237 | orchestrator |
2026-03-18 01:58:29.173255 | orchestrator |
2026-03-18 01:58:29.173268 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 01:58:29.173281 | orchestrator | Wednesday 18 March 2026 01:58:28 +0000 (0:00:00.623) 0:00:08.946 *******
2026-03-18 01:58:29.173294 | orchestrator | ===============================================================================
2026-03-18 01:58:29.173358 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.10s
2026-03-18 01:58:29.173373 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s
2026-03-18 01:58:29.532981 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-03-18 01:58:29.549918 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-03-18 01:58:29.573332 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-03-18 01:58:29.590080 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-03-18 01:58:29.604244 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-03-18 01:58:29.619647 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-03-18 01:58:29.633614 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-03-18 01:58:29.654166 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-03-18 01:58:29.670950 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-03-18 01:58:29.684732 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-03-18 01:58:29.696434 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-03-18 01:58:29.707087 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-03-18 01:58:29.723546 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-03-18 01:58:29.740264 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-03-18 01:58:29.758450 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-03-18 01:58:29.774191 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-03-18 01:58:29.787920 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-03-18 01:58:29.803900 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-03-18 01:58:29.815973 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-03-18 01:58:29.846568 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-03-18 01:58:29.860811 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-03-18 01:58:29.874657 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-03-18 01:58:29.886899 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-03-18 01:58:29.902369 | orchestrator | + [[ false == \t\r\u\e ]]
2026-03-18 01:58:30.286493 | orchestrator | ok: Runtime: 0:24:45.528266
2026-03-18 01:58:30.409696 |
2026-03-18 01:58:30.409886 | TASK [Deploy services]
2026-03-18 01:58:31.198042 | orchestrator |
2026-03-18 01:58:31.198206 | orchestrator | # DEPLOY SERVICES
2026-03-18 01:58:31.198217 | orchestrator |
2026-03-18 01:58:31.198223 | orchestrator | + set -e
2026-03-18 01:58:31.198228 | orchestrator | + echo
2026-03-18 01:58:31.198233 | orchestrator | + echo '# DEPLOY SERVICES'
2026-03-18 01:58:31.198240 | orchestrator | + echo
2026-03-18 01:58:31.198260 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-18 01:58:31.198269 | orchestrator | ++ export INTERACTIVE=false
2026-03-18 01:58:31.198275 | orchestrator | ++ INTERACTIVE=false
2026-03-18 01:58:31.198279 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-18 01:58:31.198288 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-18 01:58:31.198292 | orchestrator | + source /opt/manager-vars.sh
2026-03-18 01:58:31.198298 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-18 01:58:31.198302 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-18 01:58:31.198309 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-18 01:58:31.198313 | orchestrator | ++ CEPH_VERSION=reef
2026-03-18 01:58:31.198318 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-18 01:58:31.198322 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-18 01:58:31.198329 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-18 01:58:31.198332 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-18 01:58:31.198336 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-18 01:58:31.198341 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-18 01:58:31.198344 | orchestrator | ++ export ARA=false
2026-03-18 01:58:31.198348 | orchestrator | ++ ARA=false
2026-03-18 01:58:31.198352 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-18 01:58:31.198356 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-18 01:58:31.198359 | orchestrator | ++ export TEMPEST=false
2026-03-18 01:58:31.198363 | orchestrator | ++ TEMPEST=false
2026-03-18 01:58:31.198367 | orchestrator | ++ export IS_ZUUL=true
2026-03-18 01:58:31.198370 | orchestrator | ++ IS_ZUUL=true
2026-03-18 01:58:31.198374 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43
2026-03-18 01:58:31.198378 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43
2026-03-18 01:58:31.198390 | orchestrator | ++ export EXTERNAL_API=false
2026-03-18 01:58:31.198563 | orchestrator | ++ EXTERNAL_API=false
2026-03-18 01:58:31.198571 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-18 01:58:31.198574 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-18 01:58:31.198578 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-18 01:58:31.198582 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-18 01:58:31.198586 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-18 01:58:31.198594 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-18 01:58:31.198597 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-03-18 01:58:31.207763 | orchestrator | + set -e
2026-03-18 01:58:31.207819 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-18 01:58:31.207826 | orchestrator | ++ export INTERACTIVE=false
2026-03-18 01:58:31.207831 | orchestrator | ++ INTERACTIVE=false
2026-03-18 01:58:31.207835 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-18 01:58:31.207839 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-18 01:58:31.207843 | orchestrator | + source /opt/manager-vars.sh
2026-03-18 01:58:31.207849 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-18 01:58:31.207864 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-18 01:58:31.207870 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-18 01:58:31.209062 | orchestrator |
2026-03-18 01:58:31.209091 | orchestrator | # PULL IMAGES
2026-03-18 01:58:31.209100 | orchestrator |
2026-03-18 01:58:31.209108 | orchestrator | ++ CEPH_VERSION=reef
2026-03-18 01:58:31.209116 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-18 01:58:31.209123 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-18 01:58:31.209129 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-18 01:58:31.209154 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-18 01:58:31.209161 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-18 01:58:31.209167 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-18 01:58:31.209173 | orchestrator | ++ export ARA=false
2026-03-18 01:58:31.209180 | orchestrator | ++ ARA=false
2026-03-18 01:58:31.209194 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-18 01:58:31.209200 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-18 01:58:31.209206 | orchestrator | ++ export TEMPEST=false
2026-03-18 01:58:31.209212 | orchestrator | ++ TEMPEST=false 2026-03-18 01:58:31.209219 | orchestrator | ++ export IS_ZUUL=true 2026-03-18 01:58:31.209224 | orchestrator | ++ IS_ZUUL=true 2026-03-18 01:58:31.209231 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 01:58:31.209237 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 01:58:31.209243 | orchestrator | ++ export EXTERNAL_API=false 2026-03-18 01:58:31.209249 | orchestrator | ++ EXTERNAL_API=false 2026-03-18 01:58:31.209255 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-18 01:58:31.209261 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-18 01:58:31.209289 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-18 01:58:31.209295 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-18 01:58:31.209301 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-18 01:58:31.209307 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-18 01:58:31.209313 | orchestrator | + echo 2026-03-18 01:58:31.209319 | orchestrator | + echo '# PULL IMAGES' 2026-03-18 01:58:31.209325 | orchestrator | + echo 2026-03-18 01:58:31.210190 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-18 01:58:31.275858 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-18 01:58:31.275982 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-18 01:58:33.274532 | orchestrator | 2026-03-18 01:58:33 | INFO  | Trying to run play pull-images in environment custom 2026-03-18 01:58:43.475624 | orchestrator | 2026-03-18 01:58:43 | INFO  | Task 53a68202-7cfd-4f61-8059-b628b95c7be4 (pull-images) was prepared for execution. 2026-03-18 01:58:43.475737 | orchestrator | 2026-03-18 01:58:43 | INFO  | Task 53a68202-7cfd-4f61-8059-b628b95c7be4 is running in background. No more output. Check ARA for logs. 
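The trace above gates the new-style image pull on `semver 9.5.0 7.0.0` printing `1`, then tests `[[ 1 -ge 0 ]]`. A minimal sketch of that gate, assuming the job's `semver` helper prints -1/0/1 for a three-way comparison (`compare_semver` below is a hypothetical stand-in built on `sort -V`; it does not handle pre-release tags):

```shell
#!/usr/bin/env bash
# Sketch of the version gate seen in the log: compare two versions,
# print -1/0/1, and only run the new code path when the installed
# MANAGER_VERSION is >= the feature threshold.
compare_semver() {  # prints -1, 0 or 1 for $1 <=> $2
    if [ "$1" = "$2" ]; then echo 0; return; fi
    local lower
    lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    if [ "$lower" = "$1" ]; then echo -1; else echo 1; fi
}

result=$(compare_semver 9.5.0 7.0.0)
if [[ "$result" -ge 0 ]]; then
    echo "version gate passed"
fi
```

With `MANAGER_VERSION=9.5.0` and threshold `7.0.0` the comparison prints `1`, matching the `[[ 1 -ge 0 ]]` branch taken in the log.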
2026-03-18 01:58:43.941491 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-03-18 01:58:56.185653 | orchestrator | 2026-03-18 01:58:56 | INFO  | Task e1606723-842e-4f2c-8491-025d94dff4be (cgit) was prepared for execution.
2026-03-18 01:58:56.186623 | orchestrator | 2026-03-18 01:58:56 | INFO  | Task e1606723-842e-4f2c-8491-025d94dff4be is running in background. No more output. Check ARA for logs.
2026-03-18 01:59:08.868713 | orchestrator | 2026-03-18 01:59:08 | INFO  | Task 1287e0e5-dc23-4dff-8b18-85fb6b39c36f (dotfiles) was prepared for execution.
2026-03-18 01:59:08.868822 | orchestrator | 2026-03-18 01:59:08 | INFO  | Task 1287e0e5-dc23-4dff-8b18-85fb6b39c36f is running in background. No more output. Check ARA for logs.
2026-03-18 01:59:21.478749 | orchestrator | 2026-03-18 01:59:21 | INFO  | Task a8236e27-441a-4ab7-9656-e15a83a5b18c (homer) was prepared for execution.
2026-03-18 01:59:21.478872 | orchestrator | 2026-03-18 01:59:21 | INFO  | Task a8236e27-441a-4ab7-9656-e15a83a5b18c is running in background. No more output. Check ARA for logs.
2026-03-18 01:59:34.436962 | orchestrator | 2026-03-18 01:59:34 | INFO  | Task 6d757375-1067-4fc2-ae87-94efab8fe8d9 (phpmyadmin) was prepared for execution.
2026-03-18 01:59:34.437048 | orchestrator | 2026-03-18 01:59:34 | INFO  | Task 6d757375-1067-4fc2-ae87-94efab8fe8d9 is running in background. No more output. Check ARA for logs.
2026-03-18 01:59:47.107797 | orchestrator | 2026-03-18 01:59:47 | INFO  | Task e573afba-9743-4cfa-8cc7-5c476afbfb3f (sosreport) was prepared for execution.
2026-03-18 01:59:47.107892 | orchestrator | 2026-03-18 01:59:47 | INFO  | Task e573afba-9743-4cfa-8cc7-5c476afbfb3f is running in background. No more output. Check ARA for logs.
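Judging by the task names logged for 001-helpers.sh, each helper service (cgit, dotfiles, homer, phpmyadmin, sosreport) appears to be applied as its own play and queued in the background, which is why the console only shows "prepared for execution" and defers output to ARA. A sketch under that assumption, with `echo` standing in for the real `osism apply` call:

```shell
#!/usr/bin/env bash
# Illustrative loop over the helper plays seen in the log. The real script
# runs osism apply (likely with --no-wait, given the "running in
# background" messages); here we only collect the would-be commands.
plays="cgit dotfiles homer phpmyadmin sosreport"
queued=""
for play in $plays; do
    queued="${queued}osism apply --no-wait ${play}\n"
done
printf "%b" "$queued"
```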
2026-03-18 01:59:47.541680 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-03-18 01:59:47.548806 | orchestrator | + set -e 2026-03-18 01:59:47.548858 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-18 01:59:47.548868 | orchestrator | ++ export INTERACTIVE=false 2026-03-18 01:59:47.548877 | orchestrator | ++ INTERACTIVE=false 2026-03-18 01:59:47.548888 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-18 01:59:47.548895 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-18 01:59:47.548902 | orchestrator | + source /opt/manager-vars.sh 2026-03-18 01:59:47.548910 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-18 01:59:47.548917 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-18 01:59:47.548924 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-18 01:59:47.548931 | orchestrator | ++ CEPH_VERSION=reef 2026-03-18 01:59:47.548939 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-18 01:59:47.548946 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-18 01:59:47.548953 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-18 01:59:47.548960 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-18 01:59:47.548968 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-18 01:59:47.548975 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-18 01:59:47.548982 | orchestrator | ++ export ARA=false 2026-03-18 01:59:47.548989 | orchestrator | ++ ARA=false 2026-03-18 01:59:47.548996 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-18 01:59:47.549036 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-18 01:59:47.549043 | orchestrator | ++ export TEMPEST=false 2026-03-18 01:59:47.549050 | orchestrator | ++ TEMPEST=false 2026-03-18 01:59:47.549057 | orchestrator | ++ export IS_ZUUL=true 2026-03-18 01:59:47.549064 | orchestrator | ++ IS_ZUUL=true 2026-03-18 01:59:47.549087 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 01:59:47.549099 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 01:59:47.549106 | orchestrator | ++ export EXTERNAL_API=false 2026-03-18 01:59:47.549113 | orchestrator | ++ EXTERNAL_API=false 2026-03-18 01:59:47.549120 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-18 01:59:47.549127 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-18 01:59:47.549134 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-18 01:59:47.549142 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-18 01:59:47.549149 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-18 01:59:47.549219 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-18 01:59:47.549277 | orchestrator | ++ semver 9.5.0 8.0.3 2026-03-18 01:59:47.614401 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-18 01:59:47.614540 | orchestrator | + osism apply frr 2026-03-18 02:00:00.357140 | orchestrator | 2026-03-18 02:00:00 | INFO  | Task 2d20ec60-8f8c-4bc6-94ba-5e20b3d2d4a0 (frr) was prepared for execution. 2026-03-18 02:00:00.357612 | orchestrator | 2026-03-18 02:00:00 | INFO  | It takes a moment until task 2d20ec60-8f8c-4bc6-94ba-5e20b3d2d4a0 (frr) has been started and output is visible here. 
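include.sh exports `OSISM_APPLY_RETRY=1`, and the earlier pull-images call passes `-r 2` to `osism apply`. A retry wrapper in that spirit can be sketched as follows; `apply_with_retry` and `APPLY_CMD` are illustrative stand-ins, not part of the osism CLI:

```shell
#!/usr/bin/env bash
# Hypothetical retry wrapper: attempt the apply once, plus
# OSISM_APPLY_RETRY additional tries, stopping on the first success.
apply_with_retry() {
    local play="$1" retries="${OSISM_APPLY_RETRY:-1}" attempt
    for ((attempt = 0; attempt <= retries; attempt++)); do
        "${APPLY_CMD[@]}" "$play" && return 0
    done
    return 1
}

APPLY_CMD=(echo osism apply)   # stand-in for the real CLI in this sketch
OSISM_APPLY_RETRY=1
apply_with_retry frr
```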
2026-03-18 02:00:39.573032 | orchestrator |
2026-03-18 02:00:39.573124 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-03-18 02:00:39.573135 | orchestrator |
2026-03-18 02:00:39.573142 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-03-18 02:00:39.573154 | orchestrator | Wednesday 18 March 2026 02:00:11 +0000 (0:00:00.245) 0:00:00.245 *******
2026-03-18 02:00:39.573161 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-03-18 02:00:39.573168 | orchestrator |
2026-03-18 02:00:39.573175 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-03-18 02:00:39.573181 | orchestrator | Wednesday 18 March 2026 02:00:11 +0000 (0:00:00.303) 0:00:00.548 *******
2026-03-18 02:00:39.573188 | orchestrator | changed: [testbed-manager]
2026-03-18 02:00:39.573195 | orchestrator |
2026-03-18 02:00:39.573201 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-03-18 02:00:39.573209 | orchestrator | Wednesday 18 March 2026 02:00:13 +0000 (0:00:02.492) 0:00:03.041 *******
2026-03-18 02:00:39.573215 | orchestrator | changed: [testbed-manager]
2026-03-18 02:00:39.573240 | orchestrator |
2026-03-18 02:00:39.573247 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-03-18 02:00:39.573254 | orchestrator | Wednesday 18 March 2026 02:00:27 +0000 (0:00:13.745) 0:00:16.786 *******
2026-03-18 02:00:39.573260 | orchestrator | ok: [testbed-manager]
2026-03-18 02:00:39.573267 | orchestrator |
2026-03-18 02:00:39.573274 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-03-18 02:00:39.573280 | orchestrator | Wednesday 18 March 2026 02:00:29 +0000 (0:00:01.363) 0:00:18.149 *******
2026-03-18 02:00:39.573286 | orchestrator | changed: [testbed-manager]
2026-03-18 02:00:39.573292 | orchestrator |
2026-03-18 02:00:39.573298 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-03-18 02:00:39.573304 | orchestrator | Wednesday 18 March 2026 02:00:30 +0000 (0:00:01.170) 0:00:19.319 *******
2026-03-18 02:00:39.573310 | orchestrator | ok: [testbed-manager]
2026-03-18 02:00:39.573317 | orchestrator |
2026-03-18 02:00:39.573323 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-03-18 02:00:39.573330 | orchestrator | Wednesday 18 March 2026 02:00:31 +0000 (0:00:01.440) 0:00:20.760 *******
2026-03-18 02:00:39.573336 | orchestrator | skipping: [testbed-manager]
2026-03-18 02:00:39.573342 | orchestrator |
2026-03-18 02:00:39.573348 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-03-18 02:00:39.573354 | orchestrator | Wednesday 18 March 2026 02:00:31 +0000 (0:00:00.239) 0:00:21.000 *******
2026-03-18 02:00:39.573375 | orchestrator | skipping: [testbed-manager]
2026-03-18 02:00:39.573382 | orchestrator |
2026-03-18 02:00:39.573388 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-03-18 02:00:39.573394 | orchestrator | Wednesday 18 March 2026 02:00:32 +0000 (0:00:00.237) 0:00:21.237 *******
2026-03-18 02:00:39.573400 | orchestrator | changed: [testbed-manager]
2026-03-18 02:00:39.573407 | orchestrator |
2026-03-18 02:00:39.573413 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-03-18 02:00:39.573419 | orchestrator | Wednesday 18 March 2026 02:00:33 +0000 (0:00:01.138) 0:00:22.375 *******
2026-03-18 02:00:39.573425 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-03-18 02:00:39.573431 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-03-18 02:00:39.573439 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-03-18 02:00:39.573445 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-03-18 02:00:39.573451 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-03-18 02:00:39.573458 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-03-18 02:00:39.573464 | orchestrator |
2026-03-18 02:00:39.573470 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-03-18 02:00:39.573476 | orchestrator | Wednesday 18 March 2026 02:00:35 +0000 (0:00:02.737) 0:00:25.113 *******
2026-03-18 02:00:39.573482 | orchestrator | ok: [testbed-manager]
2026-03-18 02:00:39.573488 | orchestrator |
2026-03-18 02:00:39.573494 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-03-18 02:00:39.573500 | orchestrator | Wednesday 18 March 2026 02:00:37 +0000 (0:00:01.803) 0:00:26.916 *******
2026-03-18 02:00:39.573507 | orchestrator | changed: [testbed-manager]
2026-03-18 02:00:39.573513 | orchestrator |
2026-03-18 02:00:39.573519 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 02:00:39.573525 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 02:00:39.573531 | orchestrator |
2026-03-18 02:00:39.573538 | orchestrator |
2026-03-18 02:00:39.573548 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 02:00:39.573554 | orchestrator | Wednesday 18 March 2026 02:00:39 +0000 (0:00:01.504) 0:00:28.420 *******
2026-03-18 02:00:39.573560 | orchestrator | ===============================================================================
2026-03-18 02:00:39.573567 | orchestrator | osism.services.frr : Install frr package ------------------------------- 13.75s
2026-03-18 02:00:39.573573 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.74s
2026-03-18 02:00:39.573579 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.49s
2026-03-18 02:00:39.573585 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.80s
2026-03-18 02:00:39.573593 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.50s
2026-03-18 02:00:39.573613 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.44s
2026-03-18 02:00:39.573620 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.36s
2026-03-18 02:00:39.573628 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.17s
2026-03-18 02:00:39.573635 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.14s
2026-03-18 02:00:39.573642 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.30s
2026-03-18 02:00:39.573649 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.24s
2026-03-18 02:00:39.573657 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.24s
2026-03-18 02:00:39.871150 | orchestrator | + osism apply kubernetes
2026-03-18 02:00:42.045389 | orchestrator | 2026-03-18 02:00:42 | INFO  | Task 987765d8-6ad4-49cc-a632-3c77847c24fb (kubernetes) was prepared for execution.
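The "Set sysctl parameters" task above applies six kernel settings on the manager; the values are taken verbatim from the task output. Collected into a sysctl.d-style drop-in they look as follows (the role itself applies them via Ansible's sysctl module; writing a drop-in is just one equivalent way to persist them, and the scratch path lets the sketch run unprivileged):

```shell
#!/usr/bin/env bash
# The FRR-related sysctl settings from the play, as a sysctl.d drop-in.
conf=/tmp/90-frr-routing.conf
cat > "$conf" <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
EOF
# To activate for real: sudo cp "$conf" /etc/sysctl.d/ && sudo sysctl --system
```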
2026-03-18 02:00:42.045512 | orchestrator | 2026-03-18 02:00:42 | INFO  | It takes a moment until task 987765d8-6ad4-49cc-a632-3c77847c24fb (kubernetes) has been started and output is visible here. 2026-03-18 02:01:09.380844 | orchestrator | 2026-03-18 02:01:09.380991 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-18 02:01:09.381008 | orchestrator | 2026-03-18 02:01:09.381021 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-18 02:01:09.381033 | orchestrator | Wednesday 18 March 2026 02:00:47 +0000 (0:00:00.298) 0:00:00.298 ******* 2026-03-18 02:01:09.381044 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:01:09.381058 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:01:09.381068 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:01:09.381080 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:01:09.381090 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:01:09.381101 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:01:09.381112 | orchestrator | 2026-03-18 02:01:09.381123 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-18 02:01:09.381134 | orchestrator | Wednesday 18 March 2026 02:00:48 +0000 (0:00:00.828) 0:00:01.127 ******* 2026-03-18 02:01:09.381145 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:01:09.381157 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:01:09.381168 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:01:09.381178 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:01:09.381189 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:01:09.381199 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:01:09.381210 | orchestrator | 2026-03-18 02:01:09.381221 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-18 02:01:09.381234 | orchestrator | Wednesday 18 March 2026 
02:00:49 +0000 (0:00:00.626) 0:00:01.754 ******* 2026-03-18 02:01:09.381275 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:01:09.381286 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:01:09.381297 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:01:09.381308 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:01:09.381319 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:01:09.381329 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:01:09.381342 | orchestrator | 2026-03-18 02:01:09.381355 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-18 02:01:09.381368 | orchestrator | Wednesday 18 March 2026 02:00:50 +0000 (0:00:00.828) 0:00:02.582 ******* 2026-03-18 02:01:09.381381 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:01:09.381393 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:01:09.381405 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:01:09.381423 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:01:09.381437 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:01:09.381449 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:01:09.381461 | orchestrator | 2026-03-18 02:01:09.381474 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-18 02:01:09.381488 | orchestrator | Wednesday 18 March 2026 02:00:52 +0000 (0:00:02.103) 0:00:04.685 ******* 2026-03-18 02:01:09.381501 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:01:09.381514 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:01:09.381527 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:01:09.381539 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:01:09.381551 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:01:09.381564 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:01:09.381576 | orchestrator | 2026-03-18 02:01:09.381589 | orchestrator | TASK [k3s_prereq : 
Enable IPv6 router advertisements] ************************** 2026-03-18 02:01:09.381601 | orchestrator | Wednesday 18 March 2026 02:00:53 +0000 (0:00:01.160) 0:00:05.845 ******* 2026-03-18 02:01:09.381615 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:01:09.381659 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:01:09.381672 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:01:09.381684 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:01:09.381698 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:01:09.381711 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:01:09.381722 | orchestrator | 2026-03-18 02:01:09.381744 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-18 02:01:09.381755 | orchestrator | Wednesday 18 March 2026 02:00:54 +0000 (0:00:00.979) 0:00:06.824 ******* 2026-03-18 02:01:09.381766 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:01:09.381776 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:01:09.381787 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:01:09.381798 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:01:09.381808 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:01:09.381819 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:01:09.381830 | orchestrator | 2026-03-18 02:01:09.381841 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-18 02:01:09.381851 | orchestrator | Wednesday 18 March 2026 02:00:55 +0000 (0:00:00.647) 0:00:07.472 ******* 2026-03-18 02:01:09.381862 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:01:09.381873 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:01:09.381883 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:01:09.381894 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:01:09.381904 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:01:09.381915 | orchestrator | 
skipping: [testbed-node-2] 2026-03-18 02:01:09.381926 | orchestrator | 2026-03-18 02:01:09.381936 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-18 02:01:09.381947 | orchestrator | Wednesday 18 March 2026 02:00:55 +0000 (0:00:00.805) 0:00:08.277 ******* 2026-03-18 02:01:09.381958 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-18 02:01:09.381969 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-18 02:01:09.381979 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:01:09.381990 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-18 02:01:09.382001 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-18 02:01:09.382011 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:01:09.382087 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-18 02:01:09.382098 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-18 02:01:09.382109 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:01:09.382120 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-18 02:01:09.382152 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-18 02:01:09.382164 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:01:09.382175 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-18 02:01:09.382186 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-18 02:01:09.382196 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:01:09.382207 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-18 02:01:09.382218 | 
orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-18 02:01:09.382229 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:01:09.382316 | orchestrator | 2026-03-18 02:01:09.382330 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-18 02:01:09.382341 | orchestrator | Wednesday 18 March 2026 02:00:56 +0000 (0:00:00.668) 0:00:08.946 ******* 2026-03-18 02:01:09.382352 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:01:09.382363 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:01:09.382374 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:01:09.382395 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:01:09.382406 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:01:09.382416 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:01:09.382427 | orchestrator | 2026-03-18 02:01:09.382438 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-18 02:01:09.382450 | orchestrator | Wednesday 18 March 2026 02:00:58 +0000 (0:00:01.834) 0:00:10.780 ******* 2026-03-18 02:01:09.382461 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:01:09.382472 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:01:09.382482 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:01:09.382493 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:01:09.382504 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:01:09.382514 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:01:09.382525 | orchestrator | 2026-03-18 02:01:09.382536 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-18 02:01:09.382547 | orchestrator | Wednesday 18 March 2026 02:00:59 +0000 (0:00:01.428) 0:00:12.209 ******* 2026-03-18 02:01:09.382557 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:01:09.382568 | orchestrator | changed: 
[testbed-node-1] 2026-03-18 02:01:09.382579 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:01:09.382589 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:01:09.382600 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:01:09.382611 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:01:09.382621 | orchestrator | 2026-03-18 02:01:09.382632 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-18 02:01:09.382643 | orchestrator | Wednesday 18 March 2026 02:01:05 +0000 (0:00:05.892) 0:00:18.102 ******* 2026-03-18 02:01:09.382654 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:01:09.382671 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:01:09.382682 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:01:09.382693 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:01:09.382704 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:01:09.382715 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:01:09.382725 | orchestrator | 2026-03-18 02:01:09.382736 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-18 02:01:09.382747 | orchestrator | Wednesday 18 March 2026 02:01:06 +0000 (0:00:01.202) 0:00:19.304 ******* 2026-03-18 02:01:09.382758 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:01:09.382769 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:01:09.382779 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:01:09.382790 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:01:09.382800 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:01:09.382811 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:01:09.382822 | orchestrator | 2026-03-18 02:01:09.382833 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-18 02:01:09.382846 | orchestrator | Wednesday 18 
March 2026 02:01:07 +0000 (0:00:01.123) 0:00:20.428 *******
2026-03-18 02:01:09.382856 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:01:09.382867 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:01:09.382878 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:01:09.382888 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:01:09.382899 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:01:09.382909 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:01:09.382920 | orchestrator |
2026-03-18 02:01:09.382931 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-18 02:01:09.382941 | orchestrator | Wednesday 18 March 2026 02:01:08 +0000 (0:00:00.553) 0:00:20.982 *******
2026-03-18 02:01:09.382953 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-18 02:01:09.382971 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-18 02:01:09.382982 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:01:09.382993 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-18 02:01:09.383010 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-18 02:01:09.383021 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:01:09.383032 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-18 02:01:09.383042 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-18 02:01:09.383053 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:01:09.383064 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-18 02:01:09.383074 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-18 02:01:09.383085 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:01:09.383096 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-18 02:01:09.383107 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-18 02:01:09.383117 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:01:09.383128 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-18 02:01:09.383139 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-18 02:01:09.383149 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:01:09.383160 | orchestrator |
2026-03-18 02:01:09.383171 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-18 02:01:09.383190 | orchestrator | Wednesday 18 March 2026 02:01:09 +0000 (0:00:00.801) 0:00:21.783 *******
2026-03-18 02:02:25.472150 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:02:25.472345 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:02:25.472369 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:02:25.472382 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:02:25.472393 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:02:25.472404 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:02:25.472416 | orchestrator |
2026-03-18 02:02:25.472429 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-18 02:02:25.472442 | orchestrator | Wednesday 18 March 2026 02:01:10 +0000 (0:00:00.695) 0:00:22.479 *******
2026-03-18 02:02:25.472454 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:02:25.472465 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:02:25.472476 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:02:25.472487 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:02:25.472498 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:02:25.472509 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:02:25.472520 | orchestrator |
2026-03-18 02:02:25.472532 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-18 02:02:25.472543 | orchestrator |
2026-03-18 02:02:25.472555 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-18 02:02:25.472567 | orchestrator | Wednesday 18 March 2026 02:01:11 +0000 (0:00:01.546) 0:00:24.026 *******
2026-03-18 02:02:25.472578 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:02:25.472590 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:02:25.472601 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:02:25.472612 | orchestrator |
2026-03-18 02:02:25.472623 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-18 02:02:25.472634 | orchestrator | Wednesday 18 March 2026 02:01:13 +0000 (0:00:01.725) 0:00:25.752 *******
2026-03-18 02:02:25.472646 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:02:25.472657 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:02:25.472669 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:02:25.472682 | orchestrator |
2026-03-18 02:02:25.472695 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-18 02:02:25.472713 | orchestrator | Wednesday 18 March 2026 02:01:15 +0000 (0:00:01.774) 0:00:27.526 *******
2026-03-18 02:02:25.472732 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:02:25.472751 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:02:25.472768 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:02:25.472787 | orchestrator |
2026-03-18 02:02:25.472805 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-18 02:02:25.472821 | orchestrator | Wednesday 18 March 2026 02:01:16 +0000 (0:00:00.967) 0:00:28.494 *******
2026-03-18 02:02:25.472869 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:02:25.472886 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:02:25.472906 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:02:25.472924 | orchestrator |
2026-03-18 02:02:25.472942 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-18 02:02:25.472962 | orchestrator | Wednesday 18 March 2026 02:01:16 +0000 (0:00:00.701) 0:00:29.195 *******
2026-03-18 02:02:25.472982 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:02:25.473002 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:02:25.473022 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:02:25.473041 | orchestrator |
2026-03-18 02:02:25.473059 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-18 02:02:25.473101 | orchestrator | Wednesday 18 March 2026 02:01:17 +0000 (0:00:00.495) 0:00:29.691 *******
2026-03-18 02:02:25.473122 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:02:25.473141 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:02:25.473161 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:02:25.473179 | orchestrator |
2026-03-18 02:02:25.473197 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-18 02:02:25.473214 | orchestrator | Wednesday 18 March 2026 02:01:18 +0000 (0:00:01.006) 0:00:30.698 *******
2026-03-18 02:02:25.473233 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:02:25.473252 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:02:25.473270 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:02:25.473287 | orchestrator |
2026-03-18 02:02:25.473370 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-18 02:02:25.473389 | orchestrator | Wednesday 18 March 2026 02:01:19 +0000 (0:00:01.377) 0:00:32.075 *******
2026-03-18 02:02:25.473402 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:02:25.473413 | orchestrator |
2026-03-18 02:02:25.473423 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-18 02:02:25.473434 | orchestrator | Wednesday 18 March 2026 02:01:20 +0000 (0:00:00.574) 0:00:32.650 *******
2026-03-18 02:02:25.473445 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:02:25.473455 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:02:25.473466 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:02:25.473476 | orchestrator |
2026-03-18 02:02:25.473487 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-18 02:02:25.473497 | orchestrator | Wednesday 18 March 2026 02:01:22 +0000 (0:00:01.972) 0:00:34.622 *******
2026-03-18 02:02:25.473508 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:02:25.473518 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:02:25.473529 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:02:25.473539 | orchestrator |
2026-03-18 02:02:25.473550 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-18 02:02:25.473561 | orchestrator | Wednesday 18 March 2026 02:01:22 +0000 (0:00:00.600) 0:00:35.223 *******
2026-03-18 02:02:25.473571 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:02:25.473581 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:02:25.473592 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:02:25.473602 | orchestrator |
2026-03-18 02:02:25.473613 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-18 02:02:25.473624 | orchestrator | Wednesday 18 March 2026 02:01:23 +0000 (0:00:00.818) 0:00:36.041 *******
2026-03-18 02:02:25.473634 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:02:25.473644 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:02:25.473655 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:02:25.473666 | orchestrator |
2026-03-18 02:02:25.473677 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-18 02:02:25.473713 | orchestrator | Wednesday 18 March 2026 02:01:24 +0000 (0:00:01.284) 0:00:37.326 *******
2026-03-18 02:02:25.473725 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:02:25.473749 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:02:25.473760 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:02:25.473771 | orchestrator |
2026-03-18 02:02:25.473782 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-18 02:02:25.473792 | orchestrator | Wednesday 18 March 2026 02:01:25 +0000 (0:00:00.536) 0:00:37.862 *******
2026-03-18 02:02:25.473803 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:02:25.473814 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:02:25.473824 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:02:25.473835 | orchestrator |
2026-03-18 02:02:25.473846 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-18 02:02:25.473857 | orchestrator | Wednesday 18 March 2026 02:01:25 +0000 (0:00:00.321) 0:00:38.183 *******
2026-03-18 02:02:25.473867 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:02:25.473878 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:02:25.473889 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:02:25.473899 | orchestrator |
2026-03-18 02:02:25.473918 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-18 02:02:25.473929 | orchestrator | Wednesday 18 March 2026 02:01:27 +0000 (0:00:01.265) 0:00:39.449 *******
2026-03-18 02:02:25.473940 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:02:25.473950 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:02:25.473961 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:02:25.473972 | orchestrator |
2026-03-18 02:02:25.473983 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-18 02:02:25.473993 | orchestrator | Wednesday 18 March 2026 02:01:29 +0000 (0:00:02.896) 0:00:42.346 *******
2026-03-18 02:02:25.474004 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:02:25.474015 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:02:25.474101 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:02:25.474118 | orchestrator |
2026-03-18 02:02:25.474129 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-18 02:02:25.474140 | orchestrator | Wednesday 18 March 2026 02:01:30 +0000 (0:00:00.376) 0:00:42.722 *******
2026-03-18 02:02:25.474156 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-18 02:02:25.474177 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-18 02:02:25.474196 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-18 02:02:25.474214 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-18 02:02:25.474233 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-18 02:02:25.474252 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-18 02:02:25.474270 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-18 02:02:25.474281 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-18 02:02:25.474322 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-18 02:02:25.474340 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-18 02:02:25.474357 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-18 02:02:25.474388 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-18 02:02:25.474407 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-18 02:02:25.474426 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-18 02:02:25.474437 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-18 02:02:25.474448 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:02:25.474459 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:02:25.474469 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:02:25.474480 | orchestrator |
2026-03-18 02:02:25.474498 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-18 02:02:25.474509 | orchestrator | Wednesday 18 March 2026 02:02:24 +0000 (0:00:53.830) 0:01:36.552 *******
2026-03-18 02:02:25.474524 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:02:25.474541 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:02:25.474559 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:02:25.474578 | orchestrator |
2026-03-18 02:02:25.474595 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-18 02:02:25.474611 | orchestrator | Wednesday 18 March 2026 02:02:24 +0000 (0:00:00.344) 0:01:36.897 *******
2026-03-18 02:02:25.474646 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:03:06.518725 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:03:06.518843 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:03:06.518859 | orchestrator |
2026-03-18 02:03:06.518873 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-18 02:03:06.518886 | orchestrator | Wednesday 18 March 2026 02:02:25 +0000 (0:00:01.001) 0:01:37.898 *******
2026-03-18 02:03:06.518897 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:03:06.518908 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:03:06.518919 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:03:06.518948 | orchestrator |
2026-03-18 02:03:06.518971 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-18 02:03:06.518982 | orchestrator | Wednesday 18 March 2026 02:02:26 +0000 (0:00:01.162) 0:01:39.061 *******
2026-03-18 02:03:06.518993 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:03:06.519005 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:03:06.519015 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:03:06.519026 | orchestrator |
2026-03-18 02:03:06.519037 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-18 02:03:06.519048 | orchestrator | Wednesday 18 March 2026 02:02:50 +0000 (0:00:24.345) 0:02:03.406 *******
2026-03-18 02:03:06.519059 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:03:06.519072 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:03:06.519083 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:03:06.519094 | orchestrator |
2026-03-18 02:03:06.519105 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-18 02:03:06.519115 | orchestrator | Wednesday 18 March 2026 02:02:51 +0000 (0:00:00.599) 0:02:04.006 *******
2026-03-18 02:03:06.519127 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:03:06.519138 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:03:06.519149 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:03:06.519159 | orchestrator |
2026-03-18 02:03:06.519170 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-18 02:03:06.519181 | orchestrator | Wednesday 18 March 2026 02:02:52 +0000 (0:00:00.637) 0:02:04.643 *******
2026-03-18 02:03:06.519193 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:03:06.519204 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:03:06.519215 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:03:06.519226 | orchestrator |
2026-03-18 02:03:06.519237 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-18 02:03:06.519272 | orchestrator | Wednesday 18 March 2026 02:02:52 +0000 (0:00:00.608) 0:02:05.252 *******
2026-03-18 02:03:06.519287 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:03:06.519300 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:03:06.519312 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:03:06.519361 | orchestrator |
2026-03-18 02:03:06.519374 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-18 02:03:06.519388 | orchestrator | Wednesday 18 March 2026 02:02:53 +0000 (0:00:00.814) 0:02:06.066 *******
2026-03-18 02:03:06.519399 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:03:06.519409 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:03:06.519420 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:03:06.519431 | orchestrator |
2026-03-18 02:03:06.519442 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-18 02:03:06.519453 | orchestrator | Wednesday 18 March 2026 02:02:53 +0000 (0:00:00.308) 0:02:06.375 *******
2026-03-18 02:03:06.519464 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:03:06.519475 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:03:06.519486 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:03:06.519496 | orchestrator |
2026-03-18 02:03:06.519507 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-18 02:03:06.519518 | orchestrator | Wednesday 18 March 2026 02:02:54 +0000 (0:00:00.634) 0:02:07.010 *******
2026-03-18 02:03:06.519529 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:03:06.519540 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:03:06.519551 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:03:06.519562 | orchestrator |
2026-03-18 02:03:06.519573 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-18 02:03:06.519584 | orchestrator | Wednesday 18 March 2026 02:02:55 +0000 (0:00:00.728) 0:02:07.738 *******
2026-03-18 02:03:06.519595 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:03:06.519605 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:03:06.519616 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:03:06.519627 | orchestrator |
2026-03-18 02:03:06.519638 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-18 02:03:06.519649 | orchestrator | Wednesday 18 March 2026 02:02:56 +0000 (0:00:00.876) 0:02:08.615 *******
2026-03-18 02:03:06.519663 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:03:06.519674 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:03:06.519685 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:03:06.519695 | orchestrator |
2026-03-18 02:03:06.519706 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-18 02:03:06.519717 | orchestrator | Wednesday 18 March 2026 02:02:57 +0000 (0:00:01.114) 0:02:09.730 *******
2026-03-18 02:03:06.519728 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:03:06.519739 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:03:06.519750 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:03:06.519760 | orchestrator |
2026-03-18 02:03:06.519771 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-18 02:03:06.519782 | orchestrator | Wednesday 18 March 2026 02:02:57 +0000 (0:00:00.318) 0:02:10.049 *******
2026-03-18 02:03:06.519793 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:03:06.519804 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:03:06.519815 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:03:06.519825 | orchestrator |
2026-03-18 02:03:06.519836 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-18 02:03:06.519847 | orchestrator | Wednesday 18 March 2026 02:02:57 +0000 (0:00:00.351) 0:02:10.401 *******
2026-03-18 02:03:06.519858 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:03:06.519869 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:03:06.519880 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:03:06.519890 | orchestrator |
2026-03-18 02:03:06.519901 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-18 02:03:06.519912 | orchestrator | Wednesday 18 March 2026 02:02:58 +0000 (0:00:00.659) 0:02:11.060 *******
2026-03-18 02:03:06.519933 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:03:06.519944 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:03:06.519972 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:03:06.519984 | orchestrator |
2026-03-18 02:03:06.519996 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-18 02:03:06.520008 | orchestrator | Wednesday 18 March 2026 02:02:59 +0000 (0:00:00.889) 0:02:11.949 *******
2026-03-18 02:03:06.520019 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-18 02:03:06.520031 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-18 02:03:06.520041 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-18 02:03:06.520052 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-18 02:03:06.520063 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-18 02:03:06.520074 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-18 02:03:06.520084 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-18 02:03:06.520096 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-18 02:03:06.520107 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-18 02:03:06.520119 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-18 02:03:06.520130 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-18 02:03:06.520140 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-18 02:03:06.520151 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-18 02:03:06.520162 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-18 02:03:06.520173 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-18 02:03:06.520184 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-18 02:03:06.520194 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-18 02:03:06.520210 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-18 02:03:06.520230 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-18 02:03:06.520251 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-18 02:03:06.520269 | orchestrator |
2026-03-18 02:03:06.520288 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-18 02:03:06.520309 | orchestrator |
2026-03-18 02:03:06.520353 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-18 02:03:06.520371 | orchestrator | Wednesday 18 March 2026 02:03:02 +0000 (0:00:02.879) 0:02:14.829 *******
2026-03-18 02:03:06.520389 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:03:06.520405 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:03:06.520415 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:03:06.520426 | orchestrator |
2026-03-18 02:03:06.520469 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-18 02:03:06.520492 | orchestrator | Wednesday 18 March 2026 02:03:02 +0000 (0:00:00.345) 0:02:15.174 *******
2026-03-18 02:03:06.520503 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:03:06.520514 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:03:06.520533 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:03:06.520571 | orchestrator |
2026-03-18 02:03:06.520595 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-18 02:03:06.520613 | orchestrator | Wednesday 18 March 2026 02:03:04 +0000 (0:00:01.887) 0:02:17.062 *******
2026-03-18 02:03:06.520631 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:03:06.520649 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:03:06.520670 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:03:06.520689 | orchestrator |
2026-03-18 02:03:06.520710 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-18 02:03:06.520730 | orchestrator | Wednesday 18 March 2026 02:03:04 +0000 (0:00:00.501) 0:02:17.398 *******
2026-03-18 02:03:06.520743 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:03:06.520754 | orchestrator |
2026-03-18 02:03:06.520765 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-18 02:03:06.520776 | orchestrator | Wednesday 18 March 2026 02:03:05 +0000 (0:00:00.501) 0:02:17.899 *******
2026-03-18 02:03:06.520786 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:03:06.520797 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:03:06.520808 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:03:06.520819 | orchestrator |
2026-03-18 02:03:06.520830 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-18 02:03:06.520840 | orchestrator | Wednesday 18 March 2026 02:03:05 +0000 (0:00:00.518) 0:02:18.418 *******
2026-03-18 02:03:06.520851 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:03:06.520862 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:03:06.520873 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:03:06.520883 | orchestrator |
2026-03-18 02:03:06.520894 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-18 02:03:06.520905 | orchestrator | Wednesday 18 March 2026 02:03:06 +0000 (0:00:00.333) 0:02:18.752 *******
2026-03-18 02:03:06.520926 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:04:35.415051 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:04:35.415208 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:04:35.415223 | orchestrator |
2026-03-18 02:04:35.415234 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-18 02:04:35.415261 | orchestrator | Wednesday 18 March 2026 02:03:06 +0000 (0:00:00.346) 0:02:19.098 *******
2026-03-18 02:04:35.416002 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:04:35.416025 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:04:35.416036 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:04:35.416045 | orchestrator |
2026-03-18 02:04:35.416054 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-18 02:04:35.416064 | orchestrator | Wednesday 18 March 2026 02:03:07 +0000 (0:00:00.640) 0:02:19.738 *******
2026-03-18 02:04:35.416072 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:04:35.416081 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:04:35.416090 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:04:35.416098 | orchestrator |
2026-03-18 02:04:35.416107 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-18 02:04:35.416116 | orchestrator | Wednesday 18 March 2026 02:03:08 +0000 (0:00:01.427) 0:02:21.166 *******
2026-03-18 02:04:35.416125 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:04:35.416133 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:04:35.416142 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:04:35.416151 | orchestrator |
2026-03-18 02:04:35.416160 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-18 02:04:35.416168 | orchestrator | Wednesday 18 March 2026 02:03:09 +0000 (0:00:01.248) 0:02:22.415 *******
2026-03-18 02:04:35.416177 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:04:35.416186 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:04:35.416194 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:04:35.416203 | orchestrator |
2026-03-18 02:04:35.416211 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-18 02:04:35.416248 | orchestrator |
2026-03-18 02:04:35.416257 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-18 02:04:35.416265 | orchestrator | Wednesday 18 March 2026 02:03:20 +0000 (0:00:10.520) 0:02:32.935 *******
2026-03-18 02:04:35.416274 | orchestrator | ok: [testbed-manager]
2026-03-18 02:04:35.416283 | orchestrator |
2026-03-18 02:04:35.416292 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-18 02:04:35.416300 | orchestrator | Wednesday 18 March 2026 02:03:21 +0000 (0:00:00.868) 0:02:33.803 *******
2026-03-18 02:04:35.416308 | orchestrator | changed: [testbed-manager]
2026-03-18 02:04:35.416317 | orchestrator |
2026-03-18 02:04:35.416326 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-18 02:04:35.416335 | orchestrator | Wednesday 18 March 2026 02:03:22 +0000 (0:00:00.697) 0:02:34.501 *******
2026-03-18 02:04:35.416343 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-18 02:04:35.416352 | orchestrator |
2026-03-18 02:04:35.416360 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-18 02:04:35.416369 | orchestrator | Wednesday 18 March 2026 02:03:22 +0000 (0:00:00.547) 0:02:35.049 *******
2026-03-18 02:04:35.416377 | orchestrator | changed: [testbed-manager]
2026-03-18 02:04:35.416416 | orchestrator |
2026-03-18 02:04:35.416426 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-18 02:04:35.416435 | orchestrator | Wednesday 18 March 2026 02:03:23 +0000 (0:00:00.968) 0:02:36.017 *******
2026-03-18 02:04:35.416443 | orchestrator | changed: [testbed-manager]
2026-03-18 02:04:35.416451 | orchestrator |
2026-03-18 02:04:35.416460 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-18 02:04:35.416468 | orchestrator | Wednesday 18 March 2026 02:03:24 +0000 (0:00:00.635) 0:02:36.653 *******
2026-03-18 02:04:35.416481 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-18 02:04:35.416496 | orchestrator |
2026-03-18 02:04:35.416510 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-18 02:04:35.416523 | orchestrator | Wednesday 18 March 2026 02:03:25 +0000 (0:00:01.647) 0:02:38.300 *******
2026-03-18 02:04:35.416537 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-18 02:04:35.416550 | orchestrator |
2026-03-18 02:04:35.416585 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-18 02:04:35.416609 | orchestrator | Wednesday 18 March 2026 02:03:26 +0000 (0:00:00.915) 0:02:39.215 *******
2026-03-18 02:04:35.416619 | orchestrator | changed: [testbed-manager]
2026-03-18 02:04:35.416628 | orchestrator |
2026-03-18 02:04:35.416636 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-18 02:04:35.416645 | orchestrator | Wednesday 18 March 2026 02:03:27 +0000 (0:00:00.478) 0:02:39.694 *******
2026-03-18 02:04:35.416653 | orchestrator | changed: [testbed-manager]
2026-03-18 02:04:35.416662 | orchestrator |
2026-03-18 02:04:35.416670 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-18 02:04:35.416678 | orchestrator |
2026-03-18 02:04:35.416687 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-18 02:04:35.416696 | orchestrator | Wednesday 18 March 2026 02:03:27 +0000 (0:00:00.454) 0:02:40.149 *******
2026-03-18 02:04:35.416705 | orchestrator | ok: [testbed-manager]
2026-03-18 02:04:35.416713 | orchestrator |
2026-03-18 02:04:35.416722 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-18 02:04:35.416730 | orchestrator | Wednesday 18 March 2026 02:03:28 +0000 (0:00:00.385) 0:02:40.534 *******
2026-03-18 02:04:35.416738 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-18 02:04:35.416748 | orchestrator |
2026-03-18 02:04:35.416756 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-18 02:04:35.416765 | orchestrator | Wednesday 18 March 2026 02:03:28 +0000 (0:00:00.237) 0:02:40.771 *******
2026-03-18 02:04:35.416773 | orchestrator | ok: [testbed-manager]
2026-03-18 02:04:35.416782 | orchestrator |
2026-03-18 02:04:35.416799 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-18 02:04:35.416808 | orchestrator | Wednesday 18 March 2026 02:03:29 +0000 (0:00:00.861) 0:02:41.632 *******
2026-03-18 02:04:35.416817 | orchestrator | ok: [testbed-manager]
2026-03-18 02:04:35.416825 | orchestrator |
2026-03-18 02:04:35.416856 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-18 02:04:35.416865 | orchestrator | Wednesday 18 March 2026 02:03:30 +0000 (0:00:01.679) 0:02:43.311 *******
2026-03-18 02:04:35.416874 | orchestrator | changed: [testbed-manager]
2026-03-18 02:04:35.416882 | orchestrator |
2026-03-18 02:04:35.416891 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-18 02:04:35.416899 | orchestrator | Wednesday 18 March 2026 02:03:31 +0000 (0:00:00.830) 0:02:44.142 *******
2026-03-18 02:04:35.416908 | orchestrator | ok: [testbed-manager]
2026-03-18 02:04:35.416916 | orchestrator |
2026-03-18 02:04:35.416925 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-18 02:04:35.416933 | orchestrator | Wednesday 18 March 2026 02:03:32 +0000 (0:00:00.489) 0:02:44.632 *******
2026-03-18 02:04:35.416942 | orchestrator | changed: [testbed-manager]
2026-03-18 02:04:35.416950 | orchestrator |
2026-03-18 02:04:35.416959 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-18 02:04:35.416967 | orchestrator | Wednesday 18 March 2026 02:03:39 +0000 (0:00:07.802) 0:02:52.434 *******
2026-03-18 02:04:35.416976 | orchestrator | changed: [testbed-manager]
2026-03-18 02:04:35.416984 | orchestrator |
2026-03-18 02:04:35.416993 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-18 02:04:35.417001 | orchestrator | Wednesday 18 March 2026 02:03:52 +0000 (0:00:12.975) 0:03:05.410 *******
2026-03-18 02:04:35.417010 | orchestrator | ok: [testbed-manager]
2026-03-18 02:04:35.417018 | orchestrator |
2026-03-18 02:04:35.417027 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-18 02:04:35.417035 | orchestrator |
2026-03-18 02:04:35.417044 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-18 02:04:35.417052 | orchestrator | Wednesday 18 March 2026 02:03:53 +0000 (0:00:00.773) 0:03:06.184 *******
2026-03-18 02:04:35.417061 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:04:35.417070 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:04:35.417078 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:04:35.417087 | orchestrator |
2026-03-18 02:04:35.417095 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-18 02:04:35.417104 | orchestrator | Wednesday 18 March 2026 02:03:54 +0000 (0:00:00.341) 0:03:06.525 *******
2026-03-18 02:04:35.417112 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:04:35.417121 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:04:35.417129 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:04:35.417138 | orchestrator |
2026-03-18 02:04:35.417146 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-18 02:04:35.417155 | orchestrator | Wednesday 18 March 2026 02:03:54 +0000 (0:00:00.313) 0:03:06.838 *******
2026-03-18 02:04:35.417164 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:04:35.417173 | orchestrator |
2026-03-18 02:04:35.417182 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-18 02:04:35.417190 | orchestrator | Wednesday 18 March 2026 02:03:55 +0000 (0:00:00.728) 0:03:07.567 *******
2026-03-18 02:04:35.417199 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-18 02:04:35.417207 |
orchestrator | 2026-03-18 02:04:35.417216 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-18 02:04:35.417224 | orchestrator | Wednesday 18 March 2026 02:03:55 +0000 (0:00:00.844) 0:03:08.411 ******* 2026-03-18 02:04:35.417233 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-18 02:04:35.417242 | orchestrator | 2026-03-18 02:04:35.417250 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-18 02:04:35.417265 | orchestrator | Wednesday 18 March 2026 02:03:56 +0000 (0:00:00.894) 0:03:09.306 ******* 2026-03-18 02:04:35.417273 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:04:35.417282 | orchestrator | 2026-03-18 02:04:35.417290 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-18 02:04:35.417299 | orchestrator | Wednesday 18 March 2026 02:03:56 +0000 (0:00:00.112) 0:03:09.418 ******* 2026-03-18 02:04:35.417307 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-18 02:04:35.417316 | orchestrator | 2026-03-18 02:04:35.417324 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-18 02:04:35.417333 | orchestrator | Wednesday 18 March 2026 02:03:57 +0000 (0:00:00.992) 0:03:10.411 ******* 2026-03-18 02:04:35.417341 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:04:35.417350 | orchestrator | 2026-03-18 02:04:35.417358 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-18 02:04:35.417367 | orchestrator | Wednesday 18 March 2026 02:03:58 +0000 (0:00:00.140) 0:03:10.551 ******* 2026-03-18 02:04:35.417375 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:04:35.417404 | orchestrator | 2026-03-18 02:04:35.417414 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-18 02:04:35.417422 | orchestrator | Wednesday 18 
March 2026 02:03:58 +0000 (0:00:00.128) 0:03:10.680 ******* 2026-03-18 02:04:35.417431 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:04:35.417439 | orchestrator | 2026-03-18 02:04:35.417448 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-18 02:04:35.417461 | orchestrator | Wednesday 18 March 2026 02:03:58 +0000 (0:00:00.143) 0:03:10.823 ******* 2026-03-18 02:04:35.417470 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:04:35.417478 | orchestrator | 2026-03-18 02:04:35.417487 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-18 02:04:35.417495 | orchestrator | Wednesday 18 March 2026 02:03:58 +0000 (0:00:00.141) 0:03:10.965 ******* 2026-03-18 02:04:35.417504 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-18 02:04:35.417512 | orchestrator | 2026-03-18 02:04:35.417521 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-18 02:04:35.417529 | orchestrator | Wednesday 18 March 2026 02:04:04 +0000 (0:00:06.079) 0:03:17.045 ******* 2026-03-18 02:04:35.417538 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-18 02:04:35.417546 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-18 02:04:35.417561 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-18 02:04:59.926770 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-18 02:04:59.926871 | orchestrator | 2026-03-18 02:04:59.926878 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-18 02:04:59.926883 | orchestrator | Wednesday 18 March 2026 02:04:35 +0000 (0:00:30.791) 0:03:47.837 ******* 2026-03-18 02:04:59.926888 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-18 02:04:59.926892 | orchestrator | 
2026-03-18 02:04:59.926896 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-18 02:04:59.926900 | orchestrator | Wednesday 18 March 2026 02:04:36 +0000 (0:00:01.301) 0:03:49.139 ******* 2026-03-18 02:04:59.926905 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-18 02:04:59.926909 | orchestrator | 2026-03-18 02:04:59.926913 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-18 02:04:59.926917 | orchestrator | Wednesday 18 March 2026 02:04:38 +0000 (0:00:01.613) 0:03:50.752 ******* 2026-03-18 02:04:59.926921 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-18 02:04:59.926925 | orchestrator | 2026-03-18 02:04:59.926928 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-18 02:04:59.926936 | orchestrator | Wednesday 18 March 2026 02:04:39 +0000 (0:00:01.355) 0:03:52.108 ******* 2026-03-18 02:04:59.926943 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:04:59.926949 | orchestrator | 2026-03-18 02:04:59.926976 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-18 02:04:59.926984 | orchestrator | Wednesday 18 March 2026 02:04:39 +0000 (0:00:00.123) 0:03:52.232 ******* 2026-03-18 02:04:59.926990 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-18 02:04:59.926998 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-18 02:04:59.927004 | orchestrator | 2026-03-18 02:04:59.927010 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-18 02:04:59.927016 | orchestrator | Wednesday 18 March 2026 02:04:41 +0000 (0:00:01.936) 0:03:54.168 ******* 2026-03-18 02:04:59.927022 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:04:59.927028 | 
orchestrator | skipping: [testbed-node-1] 2026-03-18 02:04:59.927035 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:04:59.927042 | orchestrator | 2026-03-18 02:04:59.927048 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-18 02:04:59.927054 | orchestrator | Wednesday 18 March 2026 02:04:42 +0000 (0:00:00.368) 0:03:54.537 ******* 2026-03-18 02:04:59.927060 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:04:59.927067 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:04:59.927073 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:04:59.927079 | orchestrator | 2026-03-18 02:04:59.927085 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-18 02:04:59.927091 | orchestrator | 2026-03-18 02:04:59.927097 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-18 02:04:59.927103 | orchestrator | Wednesday 18 March 2026 02:04:42 +0000 (0:00:00.856) 0:03:55.393 ******* 2026-03-18 02:04:59.927109 | orchestrator | ok: [testbed-manager] 2026-03-18 02:04:59.927116 | orchestrator | 2026-03-18 02:04:59.927123 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-03-18 02:04:59.927130 | orchestrator | Wednesday 18 March 2026 02:04:43 +0000 (0:00:00.356) 0:03:55.750 ******* 2026-03-18 02:04:59.927137 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-18 02:04:59.927144 | orchestrator | 2026-03-18 02:04:59.927150 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-18 02:04:59.927156 | orchestrator | Wednesday 18 March 2026 02:04:43 +0000 (0:00:00.238) 0:03:55.989 ******* 2026-03-18 02:04:59.927162 | orchestrator | changed: [testbed-manager] 2026-03-18 02:04:59.927168 | orchestrator | 2026-03-18 02:04:59.927175 | orchestrator | PLAY [Manage labels, 
annotations, and taints on all k3s nodes] ***************** 2026-03-18 02:04:59.927181 | orchestrator | 2026-03-18 02:04:59.927188 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-18 02:04:59.927195 | orchestrator | Wednesday 18 March 2026 02:04:49 +0000 (0:00:05.814) 0:04:01.803 ******* 2026-03-18 02:04:59.927202 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:04:59.927208 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:04:59.927215 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:04:59.927222 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:04:59.927229 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:04:59.927236 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:04:59.927242 | orchestrator | 2026-03-18 02:04:59.927246 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-18 02:04:59.927250 | orchestrator | Wednesday 18 March 2026 02:04:50 +0000 (0:00:00.664) 0:04:02.468 ******* 2026-03-18 02:04:59.927254 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-18 02:04:59.927258 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-18 02:04:59.927263 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-18 02:04:59.927270 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-18 02:04:59.927290 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-18 02:04:59.927311 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-18 02:04:59.927319 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-18 02:04:59.927325 | orchestrator | ok: [testbed-node-4 -> localhost] => 
(item=node-role.kubernetes.io/worker=worker) 2026-03-18 02:04:59.927331 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-18 02:04:59.927338 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-18 02:04:59.927362 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-18 02:04:59.927370 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-18 02:04:59.927378 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-18 02:04:59.927386 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-18 02:04:59.927394 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-18 02:04:59.927420 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-18 02:04:59.927433 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-18 02:04:59.927438 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-18 02:04:59.927443 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-18 02:04:59.927447 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-18 02:04:59.927452 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-18 02:04:59.927457 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-18 02:04:59.927461 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-18 02:04:59.927466 | orchestrator | ok: [testbed-node-2 -> localhost] => 
(item=node-role.osism.tech/rook-mgr=true) 2026-03-18 02:04:59.927470 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-18 02:04:59.927475 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-18 02:04:59.927479 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-18 02:04:59.927484 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-18 02:04:59.927488 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-18 02:04:59.927493 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-18 02:04:59.927497 | orchestrator | 2026-03-18 02:04:59.927502 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-18 02:04:59.927506 | orchestrator | Wednesday 18 March 2026 02:04:58 +0000 (0:00:08.608) 0:04:11.077 ******* 2026-03-18 02:04:59.927510 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:04:59.927515 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:04:59.927520 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:04:59.927524 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:04:59.927529 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:04:59.927533 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:04:59.927538 | orchestrator | 2026-03-18 02:04:59.927542 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-18 02:04:59.927547 | orchestrator | Wednesday 18 March 2026 02:04:59 +0000 (0:00:00.531) 0:04:11.608 ******* 2026-03-18 02:04:59.927551 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:04:59.927556 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:04:59.927560 | orchestrator | skipping: [testbed-node-5] 
2026-03-18 02:04:59.927569 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:04:59.927574 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:04:59.927578 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:04:59.927582 | orchestrator | 2026-03-18 02:04:59.927587 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 02:04:59.927592 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 02:04:59.927599 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-18 02:04:59.927604 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-18 02:04:59.927609 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-18 02:04:59.927613 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-18 02:04:59.927618 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-18 02:04:59.927622 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-18 02:04:59.927626 | orchestrator | 2026-03-18 02:04:59.927630 | orchestrator | 2026-03-18 02:04:59.927634 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 02:04:59.927638 | orchestrator | Wednesday 18 March 2026 02:04:59 +0000 (0:00:00.726) 0:04:12.334 ******* 2026-03-18 02:04:59.927642 | orchestrator | =============================================================================== 2026-03-18 02:04:59.927650 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.83s 2026-03-18 02:05:00.331242 | orchestrator | k3s_server_post : Wait for Cilium 
resources ---------------------------- 30.79s 2026-03-18 02:05:00.331332 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.35s 2026-03-18 02:05:00.331343 | orchestrator | kubectl : Install required packages ------------------------------------ 12.98s 2026-03-18 02:05:00.331350 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.52s 2026-03-18 02:05:00.331356 | orchestrator | Manage labels ----------------------------------------------------------- 8.61s 2026-03-18 02:05:00.331362 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.80s 2026-03-18 02:05:00.331369 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.08s 2026-03-18 02:05:00.331375 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.89s 2026-03-18 02:05:00.331381 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.81s 2026-03-18 02:05:00.331387 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.90s 2026-03-18 02:05:00.331394 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.88s 2026-03-18 02:05:00.331423 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.10s 2026-03-18 02:05:00.331430 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.97s 2026-03-18 02:05:00.331436 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.94s 2026-03-18 02:05:00.331442 | orchestrator | k3s_agent : Check if system is PXE-booted ------------------------------- 1.89s 2026-03-18 02:05:00.331448 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.83s 2026-03-18 02:05:00.331454 | 
orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.77s 2026-03-18 02:05:00.331480 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.73s 2026-03-18 02:05:00.331488 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.68s 2026-03-18 02:05:00.696303 | orchestrator | + osism apply copy-kubeconfig 2026-03-18 02:05:13.033264 | orchestrator | 2026-03-18 02:05:13 | INFO  | Task 1cc5c8b6-d692-4328-9b9a-415d703dfb6f (copy-kubeconfig) was prepared for execution. 2026-03-18 02:05:13.033377 | orchestrator | 2026-03-18 02:05:13 | INFO  | It takes a moment until task 1cc5c8b6-d692-4328-9b9a-415d703dfb6f (copy-kubeconfig) has been started and output is visible here. 2026-03-18 02:05:20.354918 | orchestrator | 2026-03-18 02:05:20.355031 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-18 02:05:20.355040 | orchestrator | 2026-03-18 02:05:20.355046 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-18 02:05:20.355052 | orchestrator | Wednesday 18 March 2026 02:05:17 +0000 (0:00:00.165) 0:00:00.165 ******* 2026-03-18 02:05:20.355058 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-18 02:05:20.355063 | orchestrator | 2026-03-18 02:05:20.355068 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-18 02:05:20.355073 | orchestrator | Wednesday 18 March 2026 02:05:18 +0000 (0:00:00.750) 0:00:00.915 ******* 2026-03-18 02:05:20.355079 | orchestrator | changed: [testbed-manager] 2026-03-18 02:05:20.355084 | orchestrator | 2026-03-18 02:05:20.355108 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-18 02:05:20.355114 | orchestrator | Wednesday 18 March 2026 02:05:19 +0000 (0:00:01.240) 0:00:02.155 ******* 2026-03-18 
02:05:20.355118 | orchestrator | changed: [testbed-manager] 2026-03-18 02:05:20.355123 | orchestrator | 2026-03-18 02:05:20.355128 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 02:05:20.355141 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 02:05:20.355148 | orchestrator | 2026-03-18 02:05:20.355153 | orchestrator | 2026-03-18 02:05:20.355158 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 02:05:20.355163 | orchestrator | Wednesday 18 March 2026 02:05:19 +0000 (0:00:00.512) 0:00:02.668 ******* 2026-03-18 02:05:20.355167 | orchestrator | =============================================================================== 2026-03-18 02:05:20.355172 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.24s 2026-03-18 02:05:20.355177 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.75s 2026-03-18 02:05:20.355183 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.51s 2026-03-18 02:05:20.713801 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh 2026-03-18 02:05:32.922647 | orchestrator | 2026-03-18 02:05:32 | INFO  | Task d81e980c-bb3d-4beb-9cbd-bf521f69f8b5 (openstackclient) was prepared for execution. 2026-03-18 02:05:32.922764 | orchestrator | 2026-03-18 02:05:32 | INFO  | It takes a moment until task d81e980c-bb3d-4beb-9cbd-bf521f69f8b5 (openstackclient) has been started and output is visible here. 
2026-03-18 02:06:22.639307 | orchestrator | 2026-03-18 02:06:22.639415 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-03-18 02:06:22.639428 | orchestrator | 2026-03-18 02:06:22.639438 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-03-18 02:06:22.639446 | orchestrator | Wednesday 18 March 2026 02:05:37 +0000 (0:00:00.247) 0:00:00.247 ******* 2026-03-18 02:06:22.639514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-03-18 02:06:22.639524 | orchestrator | 2026-03-18 02:06:22.639532 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-03-18 02:06:22.639541 | orchestrator | Wednesday 18 March 2026 02:05:37 +0000 (0:00:00.238) 0:00:00.486 ******* 2026-03-18 02:06:22.639572 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-03-18 02:06:22.639582 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-03-18 02:06:22.639590 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-03-18 02:06:22.639598 | orchestrator | 2026-03-18 02:06:22.639606 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-03-18 02:06:22.639614 | orchestrator | Wednesday 18 March 2026 02:05:39 +0000 (0:00:01.396) 0:00:01.882 ******* 2026-03-18 02:06:22.639622 | orchestrator | changed: [testbed-manager] 2026-03-18 02:06:22.639630 | orchestrator | 2026-03-18 02:06:22.639638 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-03-18 02:06:22.639646 | orchestrator | Wednesday 18 March 2026 02:05:40 +0000 (0:00:01.503) 0:00:03.386 ******* 2026-03-18 02:06:22.639654 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage 
openstackclient service (10 retries left). 2026-03-18 02:06:22.639663 | orchestrator | ok: [testbed-manager] 2026-03-18 02:06:22.639672 | orchestrator | 2026-03-18 02:06:22.639680 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-03-18 02:06:22.639688 | orchestrator | Wednesday 18 March 2026 02:06:16 +0000 (0:00:36.268) 0:00:39.655 ******* 2026-03-18 02:06:22.639695 | orchestrator | changed: [testbed-manager] 2026-03-18 02:06:22.639703 | orchestrator | 2026-03-18 02:06:22.639711 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-03-18 02:06:22.639719 | orchestrator | Wednesday 18 March 2026 02:06:17 +0000 (0:00:01.007) 0:00:40.662 ******* 2026-03-18 02:06:22.639726 | orchestrator | ok: [testbed-manager] 2026-03-18 02:06:22.639734 | orchestrator | 2026-03-18 02:06:22.639742 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-03-18 02:06:22.639750 | orchestrator | Wednesday 18 March 2026 02:06:18 +0000 (0:00:00.669) 0:00:41.331 ******* 2026-03-18 02:06:22.639757 | orchestrator | changed: [testbed-manager] 2026-03-18 02:06:22.639765 | orchestrator | 2026-03-18 02:06:22.639773 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-03-18 02:06:22.639781 | orchestrator | Wednesday 18 March 2026 02:06:20 +0000 (0:00:01.720) 0:00:43.052 ******* 2026-03-18 02:06:22.639790 | orchestrator | changed: [testbed-manager] 2026-03-18 02:06:22.639805 | orchestrator | 2026-03-18 02:06:22.639819 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-03-18 02:06:22.639832 | orchestrator | Wednesday 18 March 2026 02:06:21 +0000 (0:00:00.855) 0:00:43.907 ******* 2026-03-18 02:06:22.639846 | orchestrator | changed: [testbed-manager] 2026-03-18 02:06:22.639861 | orchestrator | 2026-03-18 02:06:22.639870 | orchestrator | 
RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-03-18 02:06:22.639880 | orchestrator | Wednesday 18 March 2026 02:06:21 +0000 (0:00:00.637) 0:00:44.545 ******* 2026-03-18 02:06:22.639888 | orchestrator | ok: [testbed-manager] 2026-03-18 02:06:22.639898 | orchestrator | 2026-03-18 02:06:22.639908 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 02:06:22.639917 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 02:06:22.639927 | orchestrator | 2026-03-18 02:06:22.639935 | orchestrator | 2026-03-18 02:06:22.639943 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 02:06:22.639951 | orchestrator | Wednesday 18 March 2026 02:06:22 +0000 (0:00:00.464) 0:00:45.010 ******* 2026-03-18 02:06:22.639958 | orchestrator | =============================================================================== 2026-03-18 02:06:22.639966 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.27s 2026-03-18 02:06:22.639974 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.72s 2026-03-18 02:06:22.639981 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.50s 2026-03-18 02:06:22.639999 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.40s 2026-03-18 02:06:22.640007 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.01s 2026-03-18 02:06:22.640015 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.86s 2026-03-18 02:06:22.640023 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.67s 2026-03-18 02:06:22.640031 | orchestrator | osism.services.openstackclient : Wait for an healthy service 
------------ 0.64s 2026-03-18 02:06:22.640039 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.46s 2026-03-18 02:06:22.640046 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.24s 2026-03-18 02:06:25.676676 | orchestrator | 2026-03-18 02:06:25 | INFO  | Task c87eec20-8841-40b0-a2d3-10c3850139e9 (common) was prepared for execution. 2026-03-18 02:06:25.676786 | orchestrator | 2026-03-18 02:06:25 | INFO  | It takes a moment until task c87eec20-8841-40b0-a2d3-10c3850139e9 (common) has been started and output is visible here. 2026-03-18 02:06:38.687712 | orchestrator | 2026-03-18 02:06:38.687816 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-18 02:06:38.687831 | orchestrator | 2026-03-18 02:06:38.687838 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-18 02:06:38.687843 | orchestrator | Wednesday 18 March 2026 02:06:30 +0000 (0:00:00.313) 0:00:00.313 ******* 2026-03-18 02:06:38.687848 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 02:06:38.687854 | orchestrator | 2026-03-18 02:06:38.687859 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-18 02:06:38.687864 | orchestrator | Wednesday 18 March 2026 02:06:31 +0000 (0:00:01.416) 0:00:01.729 ******* 2026-03-18 02:06:38.687869 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-18 02:06:38.687874 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-18 02:06:38.687890 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-18 02:06:38.687895 | orchestrator | changed: [testbed-node-1] => 
(item=[{'service_name': 'cron'}, 'cron']) 2026-03-18 02:06:38.687900 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-18 02:06:38.687904 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-18 02:06:38.687909 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-18 02:06:38.687913 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-18 02:06:38.687919 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-18 02:06:38.687938 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-18 02:06:38.687943 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-18 02:06:38.687947 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-18 02:06:38.687952 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-18 02:06:38.687956 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-18 02:06:38.687961 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-18 02:06:38.687965 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-18 02:06:38.687970 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-18 02:06:38.687975 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-18 02:06:38.687994 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-18 02:06:38.687999 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 
'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-18 02:06:38.688003 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-18 02:06:38.688008 | orchestrator | 2026-03-18 02:06:38.688013 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-18 02:06:38.688017 | orchestrator | Wednesday 18 March 2026 02:06:34 +0000 (0:00:02.846) 0:00:04.576 ******* 2026-03-18 02:06:38.688022 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 02:06:38.688028 | orchestrator | 2026-03-18 02:06:38.688032 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-18 02:06:38.688037 | orchestrator | Wednesday 18 March 2026 02:06:35 +0000 (0:00:01.368) 0:00:05.945 ******* 2026-03-18 02:06:38.688047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:06:38.688054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:06:38.688074 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:06:38.688080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:06:38.688085 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:06:38.688090 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:06:38.688099 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:06:38.688104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:38.688109 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:38.688124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:39.893211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:39.893311 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:39.893337 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:39.893345 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:39.893425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:39.893445 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:39.893456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:39.893522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:39.893531 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:39.893538 | orchestrator 
| changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:39.893551 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:39.893558 | orchestrator | 2026-03-18 02:06:39.893567 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-18 02:06:39.893574 | orchestrator | Wednesday 18 March 2026 02:06:39 +0000 (0:00:03.665) 0:00:09.610 ******* 2026-03-18 02:06:39.893584 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 02:06:39.893591 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:39.893599 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:39.893606 | orchestrator | skipping: [testbed-manager] 2026-03-18 02:06:39.893614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 02:06:39.893631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:40.513077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:40.513202 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:06:40.513269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 02:06:40.513285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-18 02:06:40.513297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:40.513309 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:06:40.513321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 02:06:40.513337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:40.513349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:40.513360 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:06:40.513389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 02:06:40.513410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:40.513422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:40.513433 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:06:40.513444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 02:06:40.513455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:40.513542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:40.513553 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:06:40.513565 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 02:06:40.513584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:41.407950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:41.408075 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:06:41.408101 | orchestrator | 2026-03-18 02:06:41.408121 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-18 02:06:41.408141 | orchestrator | Wednesday 18 March 2026 02:06:40 +0000 (0:00:00.954) 0:00:10.565 
******* 2026-03-18 02:06:41.408161 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 02:06:41.408183 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:41.408195 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:41.408205 | orchestrator | skipping: [testbed-manager] 2026-03-18 02:06:41.408234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 02:06:41.408253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:41.408297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:41.408314 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:06:41.408363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 02:06:41.408382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:41.408399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:41.408416 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:06:41.408433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 02:06:41.408452 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:41.408527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:41.408541 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:06:41.408560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 02:06:41.408624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:46.611972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:46.612071 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:06:46.612084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 02:06:46.612095 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:46.612103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:06:46.612110 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:06:46.612117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 02:06:46.612125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-18 02:06:46.612154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:06:46.612161 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:06:46.612169 | orchestrator |
2026-03-18 02:06:46.612178 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-03-18 02:06:46.612187 | orchestrator | Wednesday 18 March 2026 02:06:42 +0000 (0:00:01.889) 0:00:12.454 *******
2026-03-18 02:06:46.612194 | orchestrator | skipping: [testbed-manager]
2026-03-18 02:06:46.612201 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:06:46.612209 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:06:46.612216 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:06:46.612237 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:06:46.612244 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:06:46.612251 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:06:46.612257 | orchestrator |
2026-03-18 02:06:46.612265 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-03-18 02:06:46.612273 | orchestrator | Wednesday 18 March 2026 02:06:43 +0000 (0:00:00.736) 0:00:13.190 *******
2026-03-18 02:06:46.612279 | orchestrator | skipping: [testbed-manager]
2026-03-18 02:06:46.612287 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:06:46.612294 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:06:46.612301 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:06:46.612309 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:06:46.612316 | 
orchestrator | skipping: [testbed-node-4] 2026-03-18 02:06:46.612324 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:06:46.612331 | orchestrator | 2026-03-18 02:06:46.612338 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-18 02:06:46.612345 | orchestrator | Wednesday 18 March 2026 02:06:44 +0000 (0:00:00.886) 0:00:14.077 ******* 2026-03-18 02:06:46.612353 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:06:46.612379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:06:46.612387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:06:46.612402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:06:46.612413 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:06:46.612421 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:06:46.612441 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:06:49.668846 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:49.668942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:49.668958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:49.668997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:49.669025 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:49.669038 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:49.669079 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:49.669094 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:49.669108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 
02:06:49.669127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:49.669138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:49.669150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:49.669161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:06:49.669173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:06:49.669185 | orchestrator |
2026-03-18 02:06:49.669197 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-03-18 02:06:49.669210 | orchestrator | Wednesday 18 March 2026 02:06:47 +0000 (0:00:03.583) 0:00:17.660 *******
2026-03-18 02:06:49.669221 | orchestrator | [WARNING]: Skipped
2026-03-18 02:06:49.669233 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-03-18 02:06:49.669245 | orchestrator | to this access issue:
2026-03-18 02:06:49.669257 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-03-18 02:06:49.669268 | orchestrator | directory
2026-03-18 02:06:49.669279 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-18 02:06:49.669290 | orchestrator |
2026-03-18 02:06:49.669301 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-03-18 02:06:49.669312 | orchestrator | Wednesday 18 March 2026 02:06:48 +0000 (0:00:01.036) 0:00:18.697 *******
2026-03-18 02:06:49.669323 | orchestrator | [WARNING]: Skipped
2026-03-18 02:06:49.669341 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-03-18 02:07:00.674211 | orchestrator | to this access issue:
2026-03-18 02:07:00.674343 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-03-18 02:07:00.674362 | orchestrator | directory
2026-03-18 02:07:00.674375 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-18 02:07:00.674387 | orchestrator |
2026-03-18 02:07:00.674399 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-03-18 02:07:00.674411 | orchestrator | Wednesday 18 March 2026 02:06:49 +0000 (0:00:01.339) 0:00:20.036 *******
2026-03-18 02:07:00.674423 | orchestrator | [WARNING]: Skipped
2026-03-18 02:07:00.674462 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-03-18 02:07:00.674513 | orchestrator | to this access issue:
2026-03-18 02:07:00.674534 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-03-18 02:07:00.674551 | orchestrator | directory
2026-03-18 02:07:00.674562 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-18 02:07:00.674573 | orchestrator |
2026-03-18 02:07:00.674584 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-03-18 02:07:00.674596 | orchestrator | Wednesday 18 March 2026 02:06:50 +0000 (0:00:00.899) 0:00:20.935 *******
2026-03-18 02:07:00.674607 | orchestrator | [WARNING]: Skipped
2026-03-18 02:07:00.674618 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-03-18 02:07:00.674636 | orchestrator | to this access issue:
2026-03-18 02:07:00.674654 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-03-18 02:07:00.674674 | orchestrator | directory
2026-03-18 02:07:00.674694 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-18 02:07:00.674711 | orchestrator |
2026-03-18 02:07:00.674724 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-03-18 02:07:00.674737 | orchestrator | Wednesday 18 March 2026 02:06:51 +0000 (0:00:00.897) 0:00:21.833 *******
2026-03-18 02:07:00.674748 | orchestrator | changed: [testbed-manager]
2026-03-18 02:07:00.674759 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:07:00.674771 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:07:00.674781 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:07:00.674792 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:07:00.674823 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:07:00.674835 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:07:00.674846 | orchestrator |
2026-03-18 02:07:00.674857 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-03-18 02:07:00.674868 | orchestrator | Wednesday 18 March 2026 02:06:54 +0000 (0:00:02.785) 0:00:24.619 *******
2026-03-18 02:07:00.674879 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-18 02:07:00.674892 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-18 02:07:00.674903 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-18 02:07:00.674914 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-18 02:07:00.674925 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-18 02:07:00.674935 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-18 02:07:00.674946 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-18 02:07:00.674957 | orchestrator |
2026-03-18 02:07:00.674968 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-03-18 02:07:00.674984 | orchestrator | Wednesday 18 March 2026 02:06:56 +0000 (0:00:02.298) 0:00:26.918 *******
2026-03-18 02:07:00.674996 | orchestrator | changed: [testbed-manager]
2026-03-18 02:07:00.675007 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:07:00.675018 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:07:00.675029 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:07:00.675040 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:07:00.675051 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:07:00.675062 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:07:00.675072 | orchestrator |
2026-03-18 02:07:00.675083 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-03-18 02:07:00.675094 | orchestrator | Wednesday 18 March 2026 02:06:58 +0000 (0:00:02.007) 0:00:28.926 *******
2026-03-18 02:07:00.675109 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 02:07:00.675154 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:07:00.675168 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:07:00.675180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:07:00.675192 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:07:00.675204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:07:00.675220 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:07:00.675240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:07:00.675262 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:07:00.675284 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:07:06.554283 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:07:06.554377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:07:06.554390 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:07:06.554399 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:07:06.554419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:07:06.554444 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:07:06.554452 | orchestrator | ok: [testbed-node-5] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:07:06.554459 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:07:06.554533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:07:06.554550 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:07:06.554563 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:07:06.554577 | orchestrator | 2026-03-18 02:07:06.554587 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-18 02:07:06.554594 | orchestrator | Wednesday 18 March 2026 02:07:00 +0000 (0:00:01.796) 0:00:30.722 ******* 2026-03-18 02:07:06.554601 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-18 02:07:06.554610 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-18 02:07:06.554616 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-18 02:07:06.554630 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-18 02:07:06.554636 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-18 02:07:06.554643 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-18 02:07:06.554650 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-18 02:07:06.554657 | orchestrator | 2026-03-18 02:07:06.554664 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-18 02:07:06.554670 | orchestrator | Wednesday 18 March 
2026 02:07:02 +0000 (0:00:01.967) 0:00:32.690 ******* 2026-03-18 02:07:06.554677 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-18 02:07:06.554685 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-18 02:07:06.554697 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-18 02:07:06.554704 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-18 02:07:06.554711 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-18 02:07:06.554717 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-18 02:07:06.554724 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-18 02:07:06.554730 | orchestrator | 2026-03-18 02:07:06.554737 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-18 02:07:06.554744 | orchestrator | Wednesday 18 March 2026 02:07:04 +0000 (0:00:01.826) 0:00:34.516 ******* 2026-03-18 02:07:06.554751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:07:06.554766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:07:07.080749 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:07:07.080875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:07:07.080928 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:07:07.080966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:07:07.080987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 02:07:07.081005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:07:07.081026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:07:07.081070 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:07:07.081092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:07:07.081126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:07:07.081153 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:07:07.081173 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:07:07.081194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:07:07.081214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:07:07.081246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:08:29.035919 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:08:29.036046 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:08:29.036059 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:08:29.036069 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:08:29.036079 | orchestrator | 2026-03-18 02:08:29.036089 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-18 02:08:29.036099 | orchestrator | Wednesday 18 March 2026 02:07:07 +0000 (0:00:02.618) 0:00:37.134 ******* 2026-03-18 02:08:29.036108 | orchestrator | changed: [testbed-manager] 2026-03-18 02:08:29.036125 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:08:29.036140 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:08:29.036155 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:08:29.036170 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:08:29.036185 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:08:29.036200 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:08:29.036229 | orchestrator | 2026-03-18 
02:08:29.036244 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-18 02:08:29.036258 | orchestrator | Wednesday 18 March 2026 02:07:08 +0000 (0:00:01.477) 0:00:38.611 ******* 2026-03-18 02:08:29.036271 | orchestrator | changed: [testbed-manager] 2026-03-18 02:08:29.036285 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:08:29.036299 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:08:29.036313 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:08:29.036327 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:08:29.036342 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:08:29.036357 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:08:29.036371 | orchestrator | 2026-03-18 02:08:29.036386 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-18 02:08:29.036395 | orchestrator | Wednesday 18 March 2026 02:07:09 +0000 (0:00:01.112) 0:00:39.724 ******* 2026-03-18 02:08:29.036404 | orchestrator | 2026-03-18 02:08:29.036413 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-18 02:08:29.036421 | orchestrator | Wednesday 18 March 2026 02:07:09 +0000 (0:00:00.090) 0:00:39.814 ******* 2026-03-18 02:08:29.036430 | orchestrator | 2026-03-18 02:08:29.036439 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-18 02:08:29.036448 | orchestrator | Wednesday 18 March 2026 02:07:09 +0000 (0:00:00.067) 0:00:39.882 ******* 2026-03-18 02:08:29.036456 | orchestrator | 2026-03-18 02:08:29.036465 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-18 02:08:29.036473 | orchestrator | Wednesday 18 March 2026 02:07:09 +0000 (0:00:00.068) 0:00:39.950 ******* 2026-03-18 02:08:29.036482 | orchestrator | 2026-03-18 02:08:29.036491 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-03-18 02:08:29.036499 | orchestrator | Wednesday 18 March 2026 02:07:10 +0000 (0:00:00.252) 0:00:40.203 ******* 2026-03-18 02:08:29.036518 | orchestrator | 2026-03-18 02:08:29.036527 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-18 02:08:29.036591 | orchestrator | Wednesday 18 March 2026 02:07:10 +0000 (0:00:00.062) 0:00:40.265 ******* 2026-03-18 02:08:29.036604 | orchestrator | 2026-03-18 02:08:29.036613 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-18 02:08:29.036622 | orchestrator | Wednesday 18 March 2026 02:07:10 +0000 (0:00:00.084) 0:00:40.350 ******* 2026-03-18 02:08:29.036631 | orchestrator | 2026-03-18 02:08:29.036640 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-18 02:08:29.036648 | orchestrator | Wednesday 18 March 2026 02:07:10 +0000 (0:00:00.112) 0:00:40.463 ******* 2026-03-18 02:08:29.036657 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:08:29.036665 | orchestrator | changed: [testbed-manager] 2026-03-18 02:08:29.036674 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:08:29.036683 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:08:29.036692 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:08:29.036719 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:08:29.036728 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:08:29.036737 | orchestrator | 2026-03-18 02:08:29.036746 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-18 02:08:29.036755 | orchestrator | Wednesday 18 March 2026 02:07:45 +0000 (0:00:35.534) 0:01:15.997 ******* 2026-03-18 02:08:29.036764 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:08:29.036772 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:08:29.036781 | orchestrator | changed: 
[testbed-node-2] 2026-03-18 02:08:29.036790 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:08:29.036798 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:08:29.036807 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:08:29.036815 | orchestrator | changed: [testbed-manager] 2026-03-18 02:08:29.036824 | orchestrator | 2026-03-18 02:08:29.036833 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-18 02:08:29.036841 | orchestrator | Wednesday 18 March 2026 02:08:18 +0000 (0:00:32.205) 0:01:48.202 ******* 2026-03-18 02:08:29.036850 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:08:29.036860 | orchestrator | ok: [testbed-manager] 2026-03-18 02:08:29.036869 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:08:29.036877 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:08:29.036886 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:08:29.036895 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:08:29.036904 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:08:29.036912 | orchestrator | 2026-03-18 02:08:29.036921 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-18 02:08:29.036930 | orchestrator | Wednesday 18 March 2026 02:08:20 +0000 (0:00:01.928) 0:01:50.131 ******* 2026-03-18 02:08:29.036938 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:08:29.036947 | orchestrator | changed: [testbed-manager] 2026-03-18 02:08:29.036956 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:08:29.036964 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:08:29.036973 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:08:29.036982 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:08:29.036990 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:08:29.036999 | orchestrator | 2026-03-18 02:08:29.037007 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 
02:08:29.037017 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-18 02:08:29.037047 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-18 02:08:29.037060 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-18 02:08:29.037069 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-18 02:08:29.037085 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-18 02:08:29.037094 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-18 02:08:29.037103 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-18 02:08:29.037111 | orchestrator | 2026-03-18 02:08:29.037120 | orchestrator | 2026-03-18 02:08:29.037129 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 02:08:29.037137 | orchestrator | Wednesday 18 March 2026 02:08:28 +0000 (0:00:08.935) 0:01:59.066 ******* 2026-03-18 02:08:29.037146 | orchestrator | =============================================================================== 2026-03-18 02:08:29.037154 | orchestrator | common : Restart fluentd container ------------------------------------- 35.53s 2026-03-18 02:08:29.037163 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.21s 2026-03-18 02:08:29.037172 | orchestrator | common : Restart cron container ----------------------------------------- 8.94s 2026-03-18 02:08:29.037180 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.67s 2026-03-18 02:08:29.037189 | orchestrator | common : Copying over config.json files for services -------------------- 
3.58s 2026-03-18 02:08:29.037197 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.85s 2026-03-18 02:08:29.037206 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.79s 2026-03-18 02:08:29.037214 | orchestrator | common : Check common containers ---------------------------------------- 2.62s 2026-03-18 02:08:29.037223 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.30s 2026-03-18 02:08:29.037231 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.01s 2026-03-18 02:08:29.037239 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.97s 2026-03-18 02:08:29.037248 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.93s 2026-03-18 02:08:29.037256 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.89s 2026-03-18 02:08:29.037272 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.83s 2026-03-18 02:08:29.037288 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.80s 2026-03-18 02:08:29.037303 | orchestrator | common : Creating log volume -------------------------------------------- 1.48s 2026-03-18 02:08:29.037326 | orchestrator | common : include_tasks -------------------------------------------------- 1.42s 2026-03-18 02:08:29.495509 | orchestrator | common : include_tasks -------------------------------------------------- 1.37s 2026-03-18 02:08:29.495673 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.34s 2026-03-18 02:08:29.495688 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.11s 2026-03-18 02:08:32.063206 | orchestrator | 2026-03-18 02:08:32 | INFO  | Task b98486ec-5783-4f9d-bce1-de782b38ac37 (loadbalancer) 
was prepared for execution. 2026-03-18 02:08:32.063312 | orchestrator | 2026-03-18 02:08:32 | INFO  | It takes a moment until task b98486ec-5783-4f9d-bce1-de782b38ac37 (loadbalancer) has been started and output is visible here. 2026-03-18 02:08:47.632421 | orchestrator | 2026-03-18 02:08:47.632706 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 02:08:47.632745 | orchestrator | 2026-03-18 02:08:47.632767 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 02:08:47.632787 | orchestrator | Wednesday 18 March 2026 02:08:36 +0000 (0:00:00.262) 0:00:00.263 ******* 2026-03-18 02:08:47.632805 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:08:47.632843 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:08:47.632856 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:08:47.632867 | orchestrator | 2026-03-18 02:08:47.632878 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 02:08:47.632889 | orchestrator | Wednesday 18 March 2026 02:08:36 +0000 (0:00:00.313) 0:00:00.576 ******* 2026-03-18 02:08:47.632900 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-18 02:08:47.632911 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-18 02:08:47.632922 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-18 02:08:47.632933 | orchestrator | 2026-03-18 02:08:47.632946 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-18 02:08:47.632959 | orchestrator | 2026-03-18 02:08:47.632972 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-18 02:08:47.632984 | orchestrator | Wednesday 18 March 2026 02:08:37 +0000 (0:00:00.444) 0:00:01.021 ******* 2026-03-18 02:08:47.632998 | orchestrator | included: 
/ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:08:47.633010 | orchestrator | 2026-03-18 02:08:47.633037 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-18 02:08:47.633049 | orchestrator | Wednesday 18 March 2026 02:08:38 +0000 (0:00:00.592) 0:00:01.613 ******* 2026-03-18 02:08:47.633060 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:08:47.633071 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:08:47.633081 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:08:47.633092 | orchestrator | 2026-03-18 02:08:47.633103 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-18 02:08:47.633113 | orchestrator | Wednesday 18 March 2026 02:08:39 +0000 (0:00:01.550) 0:00:03.164 ******* 2026-03-18 02:08:47.633124 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:08:47.633135 | orchestrator | 2026-03-18 02:08:47.633145 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-18 02:08:47.633155 | orchestrator | Wednesday 18 March 2026 02:08:40 +0000 (0:00:00.785) 0:00:03.949 ******* 2026-03-18 02:08:47.633166 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:08:47.633177 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:08:47.633188 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:08:47.633198 | orchestrator | 2026-03-18 02:08:47.633209 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-18 02:08:47.633220 | orchestrator | Wednesday 18 March 2026 02:08:40 +0000 (0:00:00.631) 0:00:04.581 ******* 2026-03-18 02:08:47.633230 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-18 02:08:47.633241 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 
1}) 2026-03-18 02:08:47.633252 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-18 02:08:47.633262 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-18 02:08:47.633273 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-18 02:08:47.633283 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-18 02:08:47.633294 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-18 02:08:47.633306 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-18 02:08:47.633317 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-18 02:08:47.633327 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-18 02:08:47.633338 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-18 02:08:47.633357 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-18 02:08:47.633367 | orchestrator | 2026-03-18 02:08:47.633378 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-18 02:08:47.633389 | orchestrator | Wednesday 18 March 2026 02:08:43 +0000 (0:00:02.164) 0:00:06.745 ******* 2026-03-18 02:08:47.633399 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-18 02:08:47.633411 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-18 02:08:47.633422 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-18 02:08:47.633432 | orchestrator | 2026-03-18 02:08:47.633443 | orchestrator | TASK [module-load : Persist modules via 
modules-load.d] ************************ 2026-03-18 02:08:47.633454 | orchestrator | Wednesday 18 March 2026 02:08:43 +0000 (0:00:00.712) 0:00:07.458 ******* 2026-03-18 02:08:47.633464 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-18 02:08:47.633475 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-18 02:08:47.633486 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-18 02:08:47.633497 | orchestrator | 2026-03-18 02:08:47.633508 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-18 02:08:47.633518 | orchestrator | Wednesday 18 March 2026 02:08:45 +0000 (0:00:01.285) 0:00:08.743 ******* 2026-03-18 02:08:47.633529 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-18 02:08:47.633540 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:08:47.633603 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-18 02:08:47.633617 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:08:47.633628 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-18 02:08:47.633639 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:08:47.633649 | orchestrator | 2026-03-18 02:08:47.633660 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-18 02:08:47.633670 | orchestrator | Wednesday 18 March 2026 02:08:45 +0000 (0:00:00.550) 0:00:09.294 ******* 2026-03-18 02:08:47.633684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-18 02:08:47.633708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-18 02:08:47.633720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-18 02:08:47.633740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 02:08:47.633752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 02:08:47.633772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 02:08:53.017828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 02:08:53.017929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 02:08:53.017942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 02:08:53.017953 | orchestrator | 2026-03-18 02:08:53.017964 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-18 02:08:53.017974 | orchestrator | Wednesday 18 March 2026 02:08:47 +0000 (0:00:01.905) 0:00:11.199 ******* 2026-03-18 02:08:53.017983 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:08:53.017993 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:08:53.018002 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:08:53.018094 | orchestrator | 2026-03-18 02:08:53.018118 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-18 02:08:53.018136 | orchestrator | Wednesday 18 March 2026 02:08:48 +0000 
(0:00:00.933) 0:00:12.133 ******* 2026-03-18 02:08:53.018146 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-18 02:08:53.018155 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-18 02:08:53.018165 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-18 02:08:53.018175 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-18 02:08:53.018185 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-18 02:08:53.018195 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-18 02:08:53.018204 | orchestrator | 2026-03-18 02:08:53.018213 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-18 02:08:53.018223 | orchestrator | Wednesday 18 March 2026 02:08:50 +0000 (0:00:01.462) 0:00:13.595 ******* 2026-03-18 02:08:53.018232 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:08:53.018241 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:08:53.018250 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:08:53.018259 | orchestrator | 2026-03-18 02:08:53.018269 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-18 02:08:53.018278 | orchestrator | Wednesday 18 March 2026 02:08:50 +0000 (0:00:00.880) 0:00:14.476 ******* 2026-03-18 02:08:53.018288 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:08:53.018299 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:08:53.018309 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:08:53.018319 | orchestrator | 2026-03-18 02:08:53.018329 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-18 02:08:53.018338 | orchestrator | Wednesday 18 March 2026 02:08:52 +0000 (0:00:01.455) 0:00:15.932 ******* 2026-03-18 02:08:53.018348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-18 02:08:53.018377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:08:53.018388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:08:53.018401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__02acc24ddaf145c564898a74b3a802e3b3f2dea9', '__omit_place_holder__02acc24ddaf145c564898a74b3a802e3b3f2dea9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-18 02:08:53.018424 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:08:53.018435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-18 02:08:53.018484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 
02:08:53.018496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:08:53.018506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__02acc24ddaf145c564898a74b3a802e3b3f2dea9', '__omit_place_holder__02acc24ddaf145c564898a74b3a802e3b3f2dea9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-18 02:08:53.018516 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:08:53.018534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-18 02:08:55.909979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:08:55.910168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:08:55.910187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__02acc24ddaf145c564898a74b3a802e3b3f2dea9', '__omit_place_holder__02acc24ddaf145c564898a74b3a802e3b3f2dea9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 2985'], 'timeout': '30'}}})  2026-03-18 02:08:55.910201 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:08:55.910214 | orchestrator | 2026-03-18 02:08:55.910225 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-18 02:08:55.910237 | orchestrator | Wednesday 18 March 2026 02:08:52 +0000 (0:00:00.651) 0:00:16.583 ******* 2026-03-18 02:08:55.910248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-18 02:08:55.910262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-18 02:08:55.910274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-18 02:08:55.910328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 02:08:55.910342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:08:55.910353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 
'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__02acc24ddaf145c564898a74b3a802e3b3f2dea9', '__omit_place_holder__02acc24ddaf145c564898a74b3a802e3b3f2dea9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-18 02:08:55.910364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 02:08:55.910375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:08:55.910386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__02acc24ddaf145c564898a74b3a802e3b3f2dea9', '__omit_place_holder__02acc24ddaf145c564898a74b3a802e3b3f2dea9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-18 02:08:55.910417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 02:09:04.698253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:04.698366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__02acc24ddaf145c564898a74b3a802e3b3f2dea9', '__omit_place_holder__02acc24ddaf145c564898a74b3a802e3b3f2dea9'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-18 02:09:04.698383 | orchestrator | 2026-03-18 02:09:04.698396 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-18 02:09:04.698409 | orchestrator | Wednesday 18 March 2026 02:08:55 +0000 (0:00:02.889) 0:00:19.473 ******* 2026-03-18 02:09:04.698421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-18 02:09:04.698435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-18 02:09:04.698446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-18 02:09:04.698483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 02:09:04.698521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 02:09:04.698534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 02:09:04.698546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 02:09:04.698558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}}) 2026-03-18 02:09:04.698675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 02:09:04.698692 | orchestrator | 2026-03-18 02:09:04.698704 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-18 02:09:04.698716 | orchestrator | Wednesday 18 March 2026 02:08:59 +0000 (0:00:03.169) 0:00:22.642 ******* 2026-03-18 02:09:04.698728 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-18 02:09:04.698750 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-18 02:09:04.698765 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-18 02:09:04.698778 | orchestrator | 2026-03-18 02:09:04.698792 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-18 02:09:04.698805 | orchestrator | Wednesday 18 March 2026 02:09:01 +0000 (0:00:01.946) 0:00:24.589 ******* 2026-03-18 02:09:04.698817 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-18 02:09:04.698831 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-18 02:09:04.698844 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-18 02:09:04.698859 | orchestrator | 2026-03-18 02:09:04.698872 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-18 02:09:04.698885 | orchestrator | Wednesday 18 March 2026 02:09:04 +0000 (0:00:03.100) 0:00:27.690 ******* 2026-03-18 02:09:04.698898 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:09:04.698912 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:09:04.698926 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:09:04.698939 | orchestrator | 2026-03-18 02:09:04.698961 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-18 02:09:16.360824 | orchestrator | Wednesday 18 March 2026 02:09:04 +0000 (0:00:00.579) 0:00:28.270 ******* 2026-03-18 02:09:16.360927 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-18 02:09:16.360954 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-18 02:09:16.360964 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-18 02:09:16.360973 | orchestrator | 2026-03-18 02:09:16.360982 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-18 02:09:16.360992 | orchestrator | Wednesday 18 March 2026 02:09:06 +0000 (0:00:02.085) 0:00:30.355 ******* 2026-03-18 02:09:16.361001 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-18 02:09:16.361011 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-18 02:09:16.361020 | orchestrator | changed: [testbed-node-2] 
=> (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-18 02:09:16.361028 | orchestrator | 2026-03-18 02:09:16.361037 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-18 02:09:16.361046 | orchestrator | Wednesday 18 March 2026 02:09:08 +0000 (0:00:02.198) 0:00:32.554 ******* 2026-03-18 02:09:16.361056 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-18 02:09:16.361065 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-18 02:09:16.361074 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-18 02:09:16.361083 | orchestrator | 2026-03-18 02:09:16.361103 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-18 02:09:16.361112 | orchestrator | Wednesday 18 March 2026 02:09:10 +0000 (0:00:01.545) 0:00:34.100 ******* 2026-03-18 02:09:16.361121 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-18 02:09:16.361130 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-18 02:09:16.361139 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-18 02:09:16.361147 | orchestrator | 2026-03-18 02:09:16.361156 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-18 02:09:16.361164 | orchestrator | Wednesday 18 March 2026 02:09:11 +0000 (0:00:01.420) 0:00:35.520 ******* 2026-03-18 02:09:16.361193 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:09:16.361203 | orchestrator | 2026-03-18 02:09:16.361211 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-18 02:09:16.361220 | orchestrator | Wednesday 18 March 2026 02:09:12 +0000 (0:00:00.562) 0:00:36.083 ******* 2026-03-18 02:09:16.361231 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-18 02:09:16.361243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-18 02:09:16.361253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': 
'30'}}}) 2026-03-18 02:09:16.361285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 02:09:16.361296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 02:09:16.361305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 02:09:16.361322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 02:09:16.361331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 02:09:16.361340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 02:09:16.361349 | orchestrator | 2026-03-18 02:09:16.361358 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-18 02:09:16.361367 | orchestrator | Wednesday 18 March 2026 02:09:15 
+0000 (0:00:03.262) 0:00:39.345 ******* 2026-03-18 02:09:16.361389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-18 02:09:17.219957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:17.220043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:17.220076 | 
orchestrator | skipping: [testbed-node-0] 2026-03-18 02:09:17.220087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-18 02:09:17.220096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:17.220103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:17.220111 | 
orchestrator | skipping: [testbed-node-1] 2026-03-18 02:09:17.220119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-18 02:09:17.220155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:17.220169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:17.220191 | 
orchestrator | skipping: [testbed-node-2] 2026-03-18 02:09:17.220202 | orchestrator | 2026-03-18 02:09:17.220215 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-18 02:09:17.220228 | orchestrator | Wednesday 18 March 2026 02:09:16 +0000 (0:00:00.586) 0:00:39.932 ******* 2026-03-18 02:09:17.220241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-18 02:09:17.220254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:17.220267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:17.220279 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:09:17.220291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-18 02:09:17.220319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:18.084863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:18.084975 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:09:18.084989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-18 02:09:18.084999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:18.085007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:18.085015 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:09:18.085023 | orchestrator | 2026-03-18 02:09:18.085032 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-18 02:09:18.085041 | orchestrator | Wednesday 18 March 2026 02:09:17 +0000 (0:00:00.854) 0:00:40.787 ******* 2026-03-18 02:09:18.085049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-18 02:09:18.085057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-03-18 02:09:18.085080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:18.085094 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:09:18.085103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-18 02:09:18.085111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-03-18 02:09:18.085119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:18.085150 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:09:18.085159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-18 02:09:18.085183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-03-18 02:09:18.085195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:18.085214 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:09:19.550748 | orchestrator | 2026-03-18 02:09:19.550839 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-18 02:09:19.550852 | orchestrator | Wednesday 18 March 2026 02:09:18 +0000 (0:00:00.856) 0:00:41.643 ******* 2026-03-18 02:09:19.550865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-18 02:09:19.550878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:19.550889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:19.550898 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:09:19.550909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-18 02:09:19.550918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:19.550943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:19.550972 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:09:19.551000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-18 02:09:19.551010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:19.551019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:19.551028 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:09:19.551037 | orchestrator | 2026-03-18 02:09:19.551046 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-18 02:09:19.551055 | orchestrator | Wednesday 18 March 2026 02:09:18 +0000 (0:00:00.620) 0:00:42.264 ******* 2026-03-18 02:09:19.551064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-18 02:09:19.551074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:19.551083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:19.551103 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:09:19.551125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-18 02:09:20.609722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:20.609838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:20.609857 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:09:20.609872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-18 02:09:20.609885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:20.609896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:20.609931 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:09:20.609944 | orchestrator | 2026-03-18 02:09:20.609956 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-18 02:09:20.609969 | orchestrator | Wednesday 18 March 2026 02:09:19 +0000 (0:00:00.856) 0:00:43.121 ******* 2026-03-18 02:09:20.609996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-18 02:09:20.610123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:20.610141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:20.610152 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:09:20.610164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-18 02:09:20.610175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:20.610189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:20.610212 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:09:20.610225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-18 02:09:20.610254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:22.065700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:22.065845 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:09:22.065873 | orchestrator | 2026-03-18 02:09:22.065893 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-18 02:09:22.065913 | orchestrator | Wednesday 18 March 2026 02:09:20 +0000 (0:00:01.053) 0:00:44.174 ******* 2026-03-18 02:09:22.065934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-18 02:09:22.065956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:22.066006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:22.066102 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:09:22.066127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-18 02:09:22.066167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:22.066219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:22.066240 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:09:22.066259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-18 02:09:22.066280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:22.066300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:22.066331 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:09:22.066346 | orchestrator | 2026-03-18 02:09:22.066363 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend 
internal TLS key] **** 2026-03-18 02:09:22.066380 | orchestrator | Wednesday 18 March 2026 02:09:21 +0000 (0:00:00.649) 0:00:44.824 ******* 2026-03-18 02:09:22.066398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-18 02:09:22.066415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:22.066457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:28.575790 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:09:28.575909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-18 02:09:28.575928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:28.575941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:28.575980 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:09:28.575993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-18 02:09:28.576005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 02:09:28.576031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 02:09:28.576043 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:09:28.576055 | orchestrator | 2026-03-18 02:09:28.576067 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-18 02:09:28.576079 | orchestrator | Wednesday 18 March 2026 02:09:22 +0000 (0:00:00.810) 0:00:45.634 ******* 2026-03-18 02:09:28.576091 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-18 02:09:28.576120 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-18 02:09:28.576132 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-18 02:09:28.576143 | orchestrator | 2026-03-18 02:09:28.576154 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-18 02:09:28.576165 | orchestrator | Wednesday 18 March 2026 02:09:23 +0000 (0:00:01.665) 0:00:47.300 ******* 2026-03-18 02:09:28.576177 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-18 02:09:28.576188 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-18 02:09:28.576199 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-18 02:09:28.576210 | orchestrator | 2026-03-18 02:09:28.576221 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-18 02:09:28.576232 | orchestrator | Wednesday 18 March 2026 02:09:25 +0000 (0:00:01.734) 0:00:49.035 ******* 2026-03-18 02:09:28.576252 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-18 02:09:28.576263 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-18 02:09:28.576274 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-18 02:09:28.576285 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-18 02:09:28.576296 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:09:28.576307 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-18 02:09:28.576320 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:09:28.576332 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-18 02:09:28.576345 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:09:28.576357 | orchestrator | 2026-03-18 02:09:28.576369 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-18 02:09:28.576382 | orchestrator | Wednesday 18 March 2026 02:09:26 +0000 (0:00:00.815) 0:00:49.851 ******* 2026-03-18 02:09:28.576396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-18 02:09:28.576410 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-18 02:09:28.576429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-18 02:09:28.576451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2026-03-18 02:09:33.378595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 02:09:33.378776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 02:09:33.378791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 02:09:33.378801 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 02:09:33.378810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 02:09:33.378819 | orchestrator | 2026-03-18 02:09:33.378829 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-18 02:09:33.378839 | orchestrator | Wednesday 18 March 2026 02:09:28 +0000 (0:00:02.290) 0:00:52.141 ******* 2026-03-18 02:09:33.378862 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:09:33.378877 | orchestrator | 2026-03-18 02:09:33.378889 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-18 02:09:33.378901 | orchestrator | Wednesday 18 March 2026 02:09:29 +0000 (0:00:00.800) 0:00:52.942 ******* 2026-03-18 02:09:33.378936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 02:09:33.378961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 02:09:33.378974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 02:09:33.378986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 
'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 02:09:33.378999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 02:09:33.379018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 02:09:33.379033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 02:09:33.379065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 02:09:34.076473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 02:09:34.076576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 02:09:34.076588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 02:09:34.076597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 02:09:34.076656 | orchestrator | 2026-03-18 02:09:34.076682 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-18 02:09:34.076692 | orchestrator | Wednesday 18 March 2026 02:09:33 +0000 (0:00:04.003) 0:00:56.946 ******* 2026-03-18 02:09:34.076701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-18 02:09:34.076741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 02:09:34.076750 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 02:09:34.076758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 02:09:34.076766 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:09:34.076775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': 
'8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-18 02:09:34.076788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 02:09:34.076801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 02:09:34.076809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 02:09:34.076817 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:09:34.076831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-18 02:09:42.880340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 02:09:42.880463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 02:09:42.880475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 02:09:42.880503 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:09:42.880513 | orchestrator | 2026-03-18 02:09:42.880521 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-18 02:09:42.880529 | orchestrator | Wednesday 18 March 2026 02:09:34 +0000 (0:00:00.700) 0:00:57.647 ******* 2026-03-18 02:09:42.880537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-18 02:09:42.880547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-18 02:09:42.880555 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:09:42.880575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-18 02:09:42.880583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-18 02:09:42.880589 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:09:42.880596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-18 02:09:42.880603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-18 02:09:42.880651 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:09:42.880659 | orchestrator | 2026-03-18 02:09:42.880667 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-18 02:09:42.880674 | orchestrator | Wednesday 18 March 2026 02:09:35 +0000 (0:00:01.160) 0:00:58.807 ******* 2026-03-18 02:09:42.880680 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:09:42.880687 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:09:42.880694 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:09:42.880700 | orchestrator | 2026-03-18 02:09:42.880707 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-18 02:09:42.880714 | orchestrator | Wednesday 18 March 2026 02:09:36 +0000 (0:00:01.301) 0:01:00.109 ******* 2026-03-18 02:09:42.880721 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:09:42.880728 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:09:42.880735 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:09:42.880741 | orchestrator | 2026-03-18 02:09:42.880748 | orchestrator | 
TASK [include_role : barbican] ************************************************* 2026-03-18 02:09:42.880755 | orchestrator | Wednesday 18 March 2026 02:09:38 +0000 (0:00:02.182) 0:01:02.291 ******* 2026-03-18 02:09:42.880761 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:09:42.880768 | orchestrator | 2026-03-18 02:09:42.880789 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-18 02:09:42.880796 | orchestrator | Wednesday 18 March 2026 02:09:39 +0000 (0:00:00.649) 0:01:02.941 ******* 2026-03-18 02:09:42.880805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-18 02:09:42.880821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-18 02:09:42.880834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-18 02:09:42.880842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-18 02:09:42.880849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-18 02:09:42.880862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-18 02:09:43.558608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-18 02:09:43.558824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-18 02:09:43.558842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-18 02:09:43.558854 | orchestrator | 2026-03-18 02:09:43.558866 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-18 02:09:43.558877 | orchestrator | Wednesday 18 March 2026 02:09:42 +0000 (0:00:03.508) 0:01:06.449 ******* 2026-03-18 02:09:43.558889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-18 02:09:43.558900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-18 02:09:43.558937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-18 02:09:43.558955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-18 02:09:43.558966 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:09:43.558978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-18 02:09:43.558988 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-18 02:09:43.558998 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:09:43.559009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-18 02:09:43.559027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-18 02:09:53.489908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-18 02:09:53.490089 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:09:53.490104 | orchestrator | 2026-03-18 02:09:53.490113 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-18 02:09:53.490123 | orchestrator | Wednesday 18 March 2026 02:09:43 +0000 (0:00:00.681) 0:01:07.131 ******* 2026-03-18 02:09:53.490142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-18 02:09:53.490167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-18 02:09:53.490177 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:09:53.490185 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-18 02:09:53.490193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-18 02:09:53.490200 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:09:53.490207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-18 02:09:53.490215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-18 02:09:53.490222 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:09:53.490230 | orchestrator | 2026-03-18 02:09:53.490237 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-18 02:09:53.490244 | orchestrator | Wednesday 18 March 2026 02:09:44 +0000 (0:00:00.927) 0:01:08.058 ******* 2026-03-18 02:09:53.490251 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:09:53.490259 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:09:53.490266 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:09:53.490274 | orchestrator | 2026-03-18 02:09:53.490281 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-18 02:09:53.490288 | orchestrator | Wednesday 18 March 2026 02:09:46 +0000 (0:00:01.534) 0:01:09.593 ******* 2026-03-18 02:09:53.490296 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:09:53.490303 | 
orchestrator | changed: [testbed-node-1] 2026-03-18 02:09:53.490331 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:09:53.490338 | orchestrator | 2026-03-18 02:09:53.490346 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-18 02:09:53.490353 | orchestrator | Wednesday 18 March 2026 02:09:48 +0000 (0:00:02.049) 0:01:11.642 ******* 2026-03-18 02:09:53.490360 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:09:53.490367 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:09:53.490375 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:09:53.490382 | orchestrator | 2026-03-18 02:09:53.490389 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-18 02:09:53.490396 | orchestrator | Wednesday 18 March 2026 02:09:48 +0000 (0:00:00.311) 0:01:11.953 ******* 2026-03-18 02:09:53.490403 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:09:53.490411 | orchestrator | 2026-03-18 02:09:53.490418 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-18 02:09:53.490425 | orchestrator | Wednesday 18 March 2026 02:09:49 +0000 (0:00:00.717) 0:01:12.671 ******* 2026-03-18 02:09:53.490451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 
fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-18 02:09:53.490463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-18 02:09:53.490476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-18 02:09:53.490485 | orchestrator | 2026-03-18 02:09:53.490493 | orchestrator | TASK 
[haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-18 02:09:53.490502 | orchestrator | Wednesday 18 March 2026 02:09:52 +0000 (0:00:02.929) 0:01:15.601 ******* 2026-03-18 02:09:53.490511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-18 02:09:53.490526 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:09:53.490535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 
2 fall 5']}}}})  2026-03-18 02:09:53.490544 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:09:53.490559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-18 02:10:01.547031 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:10:01.547125 | orchestrator | 2026-03-18 02:10:01.547137 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-18 02:10:01.547146 | orchestrator | Wednesday 18 March 2026 02:09:53 +0000 (0:00:01.457) 0:01:17.058 ******* 2026-03-18 02:10:01.547170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-18 02:10:01.547181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': 
['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-18 02:10:01.547190 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:10:01.547198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-18 02:10:01.547224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-18 02:10:01.547232 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:10:01.547240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-18 02:10:01.547247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-18 02:10:01.547255 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:10:01.547262 | orchestrator | 2026-03-18 02:10:01.547269 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-18 02:10:01.547277 | orchestrator | Wednesday 18 March 2026 02:09:55 +0000 (0:00:01.705) 0:01:18.763 ******* 2026-03-18 02:10:01.547284 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:10:01.547291 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:10:01.547298 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:10:01.547305 | orchestrator | 2026-03-18 02:10:01.547313 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-18 02:10:01.547323 | orchestrator | Wednesday 18 March 2026 02:09:55 +0000 (0:00:00.516) 0:01:19.280 ******* 2026-03-18 02:10:01.547330 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:10:01.547338 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:10:01.547345 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:10:01.547352 | orchestrator | 2026-03-18 02:10:01.547359 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-18 02:10:01.547366 | orchestrator | Wednesday 18 March 2026 02:09:57 +0000 (0:00:01.361) 0:01:20.642 ******* 2026-03-18 02:10:01.547373 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:10:01.547381 | orchestrator | 2026-03-18 02:10:01.547388 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-18 02:10:01.547395 | orchestrator | Wednesday 18 March 2026 02:09:58 +0000 (0:00:01.010) 0:01:21.653 ******* 2026-03-18 02:10:01.547421 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-18 02:10:01.547438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:10:01.547448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-18 02:10:01.547457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-18 02:10:01.547465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}}}}) 2026-03-18 02:10:01.547478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:10:02.243736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-18 02:10:02.243872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-18 02:10:02.243895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-18 02:10:02.243922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:10:02.243949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-18 02:10:02.243994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-18 02:10:02.244029 | orchestrator | 2026-03-18 02:10:02.244050 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-18 02:10:02.244080 | orchestrator | Wednesday 18 March 2026 02:10:01 +0000 (0:00:03.561) 0:01:25.214 ******* 2026-03-18 02:10:02.244101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-18 02:10:02.244122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:10:02.244143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-18 
02:10:02.244162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-18 02:10:02.244184 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:10:02.244208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-18 02:10:08.757224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:10:08.757360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-18 02:10:08.757389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-18 02:10:08.757410 | orchestrator | skipping: 
[testbed-node-1] 2026-03-18 02:10:08.757432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-18 02:10:08.757490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:10:08.757580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 02:10:08.757603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 02:10:08.757622 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:10:08.757672 | orchestrator |
2026-03-18 02:10:08.757694 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-03-18 02:10:08.757714 | orchestrator | Wednesday 18 March 2026 02:10:02 +0000 (0:00:00.712) 0:01:25.926 *******
2026-03-18 02:10:08.757736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-18 02:10:08.757759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-18 02:10:08.757781 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:10:08.757803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-18 02:10:08.757825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-18 02:10:08.757848 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:10:08.757869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-18 02:10:08.757890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-18 02:10:08.757911 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:10:08.757932 | orchestrator |
2026-03-18 02:10:08.757953 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-03-18 02:10:08.757974 | orchestrator | Wednesday 18 March 2026 02:10:03 +0000 (0:00:01.303) 0:01:27.230 *******
2026-03-18 02:10:08.757993 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:10:08.758014 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:10:08.758105 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:10:08.758145 | orchestrator |
2026-03-18 02:10:08.758166 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-03-18 02:10:08.758186 | orchestrator | Wednesday 18 March 2026 02:10:04 +0000 (0:00:01.263) 0:01:28.493 *******
2026-03-18 02:10:08.758205 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:10:08.758225 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:10:08.758246 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:10:08.758267 | orchestrator |
2026-03-18 02:10:08.758287 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-03-18 02:10:08.758307 | orchestrator | Wednesday 18 March 2026 02:10:06 +0000 (0:00:02.045) 0:01:30.539 *******
2026-03-18 02:10:08.758325 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:10:08.758344 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:10:08.758364 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:10:08.758384 | orchestrator |
2026-03-18 02:10:08.758404 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-03-18 02:10:08.758424 | orchestrator | Wednesday 18 March 2026 02:10:07 +0000 (0:00:00.344) 0:01:30.893 *******
2026-03-18 02:10:08.758443 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:10:08.758463 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:10:08.758482 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:10:08.758502 | orchestrator |
2026-03-18 02:10:08.758522 | orchestrator | TASK [include_role : designate] ************************************************
2026-03-18 02:10:08.758541 | orchestrator | Wednesday 18 March 2026 02:10:07 +0000 (0:00:01.092) 0:01:31.238 *******
2026-03-18 02:10:08.758559 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:10:08.758579 | orchestrator |
2026-03-18 02:10:08.758600 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-03-18 02:10:08.758620 | orchestrator | Wednesday 18 March 2026 02:10:08 +0000 (0:00:01.092) 0:01:32.330 *******
2026-03-18 02:10:12.254009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api',
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-18 02:10:12.254155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 02:10:12.254167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 02:10:12.254194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-18 02:10:12.254200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 02:10:12.254226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 02:10:12.254233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 02:10:12.254239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 02:10:12.254244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 02:10:12.254255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 02:10:12.254260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-18 02:10:12.254265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 02:10:12.254278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 02:10:13.266292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-18 02:10:13.266391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-18 02:10:13.266429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 02:10:13.266439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 02:10:13.266449 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 02:10:13.266473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 02:10:13.266500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 02:10:13.266510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 
'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-18 02:10:13.266519 | orchestrator |
2026-03-18 02:10:13.266530 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-03-18 02:10:13.266546 | orchestrator | Wednesday 18 March 2026 02:10:12 +0000 (0:00:03.772) 0:01:36.103 *******
2026-03-18 02:10:13.266555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-18 02:10:13.266565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 02:10:13.266574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 02:10:13.266583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 02:10:13.266600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 02:10:13.791714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 02:10:13.791845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-18 02:10:13.791859 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:10:13.791870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-18 02:10:13.791878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 02:10:13.792357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 02:10:13.792392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 02:10:13.792427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 02:10:13.792449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 
02:10:13.792462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-18 02:10:13.792470 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:10:13.792480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-18 02:10:13.792488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 02:10:13.792497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 02:10:13.792511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 02:10:24.442113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 02:10:24.442221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 02:10:24.442236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-18 02:10:24.442246 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:10:24.442256 | orchestrator | 2026-03-18 02:10:24.442266 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-18 02:10:24.442273 | orchestrator | Wednesday 18 March 2026 02:10:13 +0000 (0:00:01.262) 0:01:37.365 ******* 2026-03-18 02:10:24.442279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-18 02:10:24.442286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-18 02:10:24.442292 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:10:24.442298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-18 02:10:24.442303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-18 02:10:24.442308 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:10:24.442312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-18 02:10:24.442317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-18 02:10:24.442338 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:10:24.442346 | orchestrator | 2026-03-18 02:10:24.442354 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-18 02:10:24.442362 | orchestrator | Wednesday 18 March 2026 02:10:15 +0000 (0:00:01.377) 0:01:38.742 ******* 2026-03-18 02:10:24.442370 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:10:24.442378 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:10:24.442386 | orchestrator | changed: 
[testbed-node-2] 2026-03-18 02:10:24.442394 | orchestrator | 2026-03-18 02:10:24.442402 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-18 02:10:24.442409 | orchestrator | Wednesday 18 March 2026 02:10:16 +0000 (0:00:01.413) 0:01:40.156 ******* 2026-03-18 02:10:24.442417 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:10:24.442425 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:10:24.442433 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:10:24.442441 | orchestrator | 2026-03-18 02:10:24.442449 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-18 02:10:24.442458 | orchestrator | Wednesday 18 March 2026 02:10:18 +0000 (0:00:02.064) 0:01:42.220 ******* 2026-03-18 02:10:24.442482 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:10:24.442492 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:10:24.442511 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:10:24.442520 | orchestrator | 2026-03-18 02:10:24.442528 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-18 02:10:24.442546 | orchestrator | Wednesday 18 March 2026 02:10:18 +0000 (0:00:00.324) 0:01:42.545 ******* 2026-03-18 02:10:24.442555 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:10:24.442563 | orchestrator | 2026-03-18 02:10:24.442570 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-18 02:10:24.442579 | orchestrator | Wednesday 18 March 2026 02:10:20 +0000 (0:00:01.129) 0:01:43.674 ******* 2026-03-18 02:10:24.442593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 02:10:24.442606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-18 02:10:24.442628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 02:10:27.582002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 02:10:27.582231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-18 02:10:27.582276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-18 02:10:27.582306 | orchestrator | 2026-03-18 02:10:27.582320 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-18 02:10:27.582334 | orchestrator | Wednesday 18 March 2026 02:10:24 +0000 (0:00:04.433) 0:01:48.108 ******* 2026-03-18 02:10:27.582350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-18 02:10:27.582380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-18 02:10:31.571508 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:10:31.571615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-18 02:10:31.571649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-18 02:10:31.571721 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:10:31.571748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-03-18 02:10:31.571760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-03-18 02:10:31.571772 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:10:31.571777 | orchestrator |
2026-03-18 02:10:31.571783 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2026-03-18 02:10:31.571789 | orchestrator | Wednesday 18 March 2026 02:10:27 +0000 (0:00:03.146) 0:01:51.254 *******
2026-03-18 02:10:31.571794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-18 02:10:31.571805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-18 02:10:39.841349 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:10:39.841446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-18 02:10:39.841459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-18 02:10:39.841468 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:10:39.841476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-18 02:10:39.841497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-18 02:10:39.841505 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:10:39.841512 | orchestrator |
2026-03-18 02:10:39.841521 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-03-18 02:10:39.841529 | orchestrator | Wednesday 18 March 2026 02:10:31 +0000 (0:00:03.887) 0:01:55.141 *******
2026-03-18 02:10:39.841536 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:10:39.841569 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:10:39.841580 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:10:39.841591 | orchestrator |
2026-03-18 02:10:39.841603 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-03-18 02:10:39.841614 | orchestrator | Wednesday 18 March 2026 02:10:32 +0000 (0:00:01.299) 0:01:56.440 *******
2026-03-18 02:10:39.841625 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:10:39.841636 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:10:39.841648 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:10:39.841659 | orchestrator |
2026-03-18 02:10:39.841671 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-03-18 02:10:39.841763 | orchestrator | Wednesday 18 March 2026 02:10:34 +0000 (0:00:01.971) 0:01:58.412 *******
2026-03-18 02:10:39.841776 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:10:39.841789 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:10:39.841801 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:10:39.841813 | orchestrator |
2026-03-18 02:10:39.841820 | orchestrator | TASK [include_role : grafana] **************************************************
2026-03-18 02:10:39.841827 | orchestrator | Wednesday 18 March 2026 02:10:35 +0000 (0:00:00.356) 0:01:58.769 *******
2026-03-18 02:10:39.841833 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:10:39.841840 | orchestrator |
2026-03-18 02:10:39.841847 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-03-18 02:10:39.841853 | orchestrator | Wednesday 18 March 2026 02:10:36 +0000 (0:00:01.093) 0:01:59.862 *******
2026-03-18 02:10:39.841877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-18 02:10:39.841888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-18 02:10:39.841897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-18 02:10:39.841905 | orchestrator |
2026-03-18 02:10:39.841913 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2026-03-18 02:10:39.841921 | orchestrator | Wednesday 18 March 2026 02:10:39 +0000 (0:00:02.936) 0:02:02.798 *******
2026-03-18 02:10:39.841929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-18 02:10:39.841947 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:10:39.841956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-18 02:10:39.841964 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:10:39.842089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-18 02:10:39.842107 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:10:39.842116 | orchestrator |
2026-03-18 02:10:39.842123 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-03-18 02:10:39.842131 | orchestrator | Wednesday 18 March 2026 02:10:39 +0000 (0:00:00.409) 0:02:03.208 *******
2026-03-18 02:10:39.842140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-18 02:10:39.842158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-18 02:10:48.748106 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:10:48.748259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-18 02:10:48.748280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-18 02:10:48.748294 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:10:48.748306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-03-18 02:10:48.748324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-03-18 02:10:48.748374 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:10:48.748396 | orchestrator |
2026-03-18 02:10:48.748417 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-03-18 02:10:48.748438 | orchestrator | Wednesday 18 March 2026 02:10:40 +0000 (0:00:00.918) 0:02:04.127 *******
2026-03-18 02:10:48.748454 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:10:48.748466 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:10:48.748476 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:10:48.748487 | orchestrator |
2026-03-18 02:10:48.748498 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-03-18 02:10:48.748508 | orchestrator | Wednesday 18 March 2026 02:10:41 +0000 (0:00:01.231) 0:02:05.359 *******
2026-03-18 02:10:48.748519 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:10:48.748530 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:10:48.748540 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:10:48.748551 | orchestrator |
2026-03-18 02:10:48.748562 | orchestrator | TASK [include_role : heat] *****************************************************
2026-03-18 02:10:48.748572 | orchestrator | Wednesday 18 March 2026 02:10:43 +0000 (0:00:02.076) 0:02:07.435 *******
2026-03-18 02:10:48.748583 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:10:48.748609 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:10:48.748623 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:10:48.748635 | orchestrator |
2026-03-18 02:10:48.748647 | orchestrator | TASK [include_role : horizon] **************************************************
2026-03-18 02:10:48.748660 | orchestrator | Wednesday 18 March 2026 02:10:44 +0000 (0:00:00.343) 0:02:07.779 *******
2026-03-18 02:10:48.748673 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:10:48.748685 | orchestrator |
2026-03-18 02:10:48.748727 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-03-18 02:10:48.748739 | orchestrator | Wednesday 18 March 2026 02:10:45 +0000 (0:00:01.190) 0:02:08.969 *******
2026-03-18 02:10:48.748781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-18 02:10:48.748820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-18 02:10:48.748846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-18 02:10:50.467205 | orchestrator |
2026-03-18 02:10:50.467341 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2026-03-18 02:10:50.467370 | orchestrator | Wednesday 18 March 2026 02:10:48 +0000 (0:00:03.349) 0:02:12.319 *******
2026-03-18 02:10:50.467417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-18 02:10:50.467444 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:10:50.467492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-18 02:10:50.467542 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:10:50.467573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-18 02:10:50.467593 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:10:50.467611 | orchestrator |
2026-03-18 02:10:50.467627 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-03-18 02:10:50.467644 | orchestrator | Wednesday 18 March 2026 02:10:49 +0000 (0:00:00.747) 0:02:13.067 *******
2026-03-18 02:10:50.467661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-18 02:10:50.467681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-18 02:10:50.467744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-18 02:10:50.467780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-18 02:10:59.562199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-03-18 02:10:59.562326 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:10:59.562355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-18 02:10:59.562377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-18 02:10:59.562415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-18 02:10:59.562435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-18 02:10:59.562452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-03-18 02:10:59.562469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-18 02:10:59.562485 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:10:59.562501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-18 02:10:59.562518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-03-18 02:10:59.562564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-03-18 02:10:59.562581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-03-18 02:10:59.562597 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:10:59.562613 | orchestrator |
2026-03-18 02:10:59.562632 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-03-18 02:10:59.562651 | orchestrator | Wednesday 18 March 2026 02:10:50 +0000 (0:00:00.970) 0:02:14.037 *******
2026-03-18 02:10:59.562668 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:10:59.562684 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:10:59.562699 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:10:59.562789 | orchestrator |
2026-03-18 02:10:59.562801 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-03-18 02:10:59.562812 | orchestrator | Wednesday 18 March 2026 02:10:52 +0000 (0:00:01.652) 0:02:15.690 *******
2026-03-18 02:10:59.562823 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:10:59.562836 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:10:59.562847 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:10:59.562858 | orchestrator |
2026-03-18 02:10:59.562870 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-03-18 02:10:59.562881 | orchestrator | Wednesday 18 March 2026 02:10:54 +0000 (0:00:02.123) 0:02:17.813 *******
2026-03-18 02:10:59.562892 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:10:59.562902 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:10:59.562932 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:10:59.562942 | orchestrator |
2026-03-18 02:10:59.562952 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-03-18 02:10:59.562961 | orchestrator | Wednesday 18 March 2026 02:10:54 +0000 (0:00:00.349) 0:02:18.163 *******
2026-03-18 02:10:59.562971 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:10:59.562981 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:10:59.562990 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:10:59.563000 | orchestrator |
2026-03-18 02:10:59.563009 | orchestrator | TASK [include_role : keystone] *************************************************
2026-03-18 02:10:59.563019 | orchestrator | Wednesday 18 March 2026 02:10:54 +0000 (0:00:00.359) 0:02:18.522 *******
2026-03-18 02:10:59.563028 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:10:59.563038 | orchestrator |
2026-03-18 02:10:59.563048 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-03-18 02:10:59.563057 | orchestrator | Wednesday 18 March 2026 02:10:56 +0000 (0:00:01.231) 0:02:19.754 *******
2026-03-18 02:10:59.563080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-18 02:10:59.563108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-18 02:10:59.563120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-18 02:10:59.563132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes':
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:10:59.563150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 02:11:00.231444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 02:11:00.231523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:11:00.231557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 02:11:00.231563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 02:11:00.231568 | orchestrator | 2026-03-18 02:11:00.231577 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-18 02:11:00.231583 | orchestrator | Wednesday 18 March 2026 02:10:59 +0000 (0:00:03.376) 0:02:23.130 ******* 2026-03-18 02:11:00.231598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-18 02:11:00.231609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 02:11:00.231614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 02:11:00.231623 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:00.231629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-18 02:11:00.231635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 02:11:00.231639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 02:11:00.231644 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:11:00.231656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-18 02:11:09.811019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 02:11:09.811130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 02:11:09.811142 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:09.811151 | orchestrator | 2026-03-18 02:11:09.811158 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-18 02:11:09.811166 | orchestrator | Wednesday 18 March 2026 02:11:00 +0000 (0:00:00.664) 0:02:23.795 ******* 2026-03-18 02:11:09.811174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-18 02:11:09.811183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-18 02:11:09.811191 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:09.811197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-18 02:11:09.811204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-18 02:11:09.811211 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:11:09.811218 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-18 02:11:09.811224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-18 02:11:09.811230 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:09.811237 | orchestrator | 2026-03-18 02:11:09.811243 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-18 02:11:09.811249 | orchestrator | Wednesday 18 March 2026 02:11:01 +0000 (0:00:01.267) 0:02:25.062 ******* 2026-03-18 02:11:09.811256 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:11:09.811262 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:11:09.811268 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:11:09.811274 | orchestrator | 2026-03-18 02:11:09.811280 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-18 02:11:09.811291 | orchestrator | Wednesday 18 March 2026 02:11:02 +0000 (0:00:01.379) 0:02:26.441 ******* 2026-03-18 02:11:09.811297 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:11:09.811303 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:11:09.811309 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:11:09.811315 | orchestrator | 2026-03-18 02:11:09.811321 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-18 02:11:09.811328 | orchestrator | Wednesday 18 March 2026 02:11:04 +0000 (0:00:02.108) 0:02:28.550 ******* 2026-03-18 02:11:09.811334 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:09.811340 | orchestrator | 
skipping: [testbed-node-1] 2026-03-18 02:11:09.811346 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:09.811352 | orchestrator | 2026-03-18 02:11:09.811370 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-18 02:11:09.811411 | orchestrator | Wednesday 18 March 2026 02:11:05 +0000 (0:00:00.361) 0:02:28.911 ******* 2026-03-18 02:11:09.811418 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:11:09.811424 | orchestrator | 2026-03-18 02:11:09.811430 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-18 02:11:09.811436 | orchestrator | Wednesday 18 March 2026 02:11:06 +0000 (0:00:01.284) 0:02:30.196 ******* 2026-03-18 02:11:09.811444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 02:11:09.811453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 02:11:09.811461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 02:11:09.811472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 02:11:09.811486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 02:11:15.234257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 02:11:15.234349 | orchestrator | 2026-03-18 02:11:15.234358 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-18 02:11:15.234363 | orchestrator | Wednesday 18 March 2026 02:11:09 +0000 (0:00:03.179) 0:02:33.375 ******* 2026-03-18 02:11:15.234370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-18 02:11:15.234410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 02:11:15.234430 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:15.234436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-18 02:11:15.234454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 02:11:15.234459 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:11:15.234463 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-18 02:11:15.234468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 02:11:15.234472 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:15.234476 | orchestrator | 2026-03-18 02:11:15.234483 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-18 02:11:15.234487 | orchestrator | 
Wednesday 18 March 2026 02:11:10 +0000 (0:00:00.647) 0:02:34.022 ******* 2026-03-18 02:11:15.234492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-18 02:11:15.234499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-18 02:11:15.234507 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:15.234513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-18 02:11:15.234522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-18 02:11:15.234529 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:11:15.234538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-18 02:11:15.234544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-18 02:11:15.234550 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:15.234555 | orchestrator | 2026-03-18 02:11:15.234562 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-18 02:11:15.234572 | orchestrator | Wednesday 18 March 2026 02:11:11 +0000 (0:00:00.951) 0:02:34.974 ******* 2026-03-18 02:11:15.234577 | 
orchestrator | changed: [testbed-node-0] 2026-03-18 02:11:15.234583 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:11:15.234589 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:11:15.234595 | orchestrator | 2026-03-18 02:11:15.234601 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-18 02:11:15.234607 | orchestrator | Wednesday 18 March 2026 02:11:13 +0000 (0:00:01.731) 0:02:36.705 ******* 2026-03-18 02:11:15.234612 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:11:15.234618 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:11:15.234624 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:11:15.234630 | orchestrator | 2026-03-18 02:11:15.234636 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-18 02:11:15.234648 | orchestrator | Wednesday 18 March 2026 02:11:15 +0000 (0:00:02.098) 0:02:38.803 ******* 2026-03-18 02:11:19.819363 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:11:19.819457 | orchestrator | 2026-03-18 02:11:19.819469 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-18 02:11:19.819477 | orchestrator | Wednesday 18 March 2026 02:11:16 +0000 (0:00:01.070) 0:02:39.874 ******* 2026-03-18 02:11:19.819488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-18 02:11:19.819522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:11:19.819530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 02:11:19.819538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-18 02:11:19.819558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-18 02:11:19.819580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:11:19.819587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-18 02:11:19.819599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 02:11:19.819605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:11:19.819612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 02:11:19.819621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-18 02:11:19.819635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-18 02:11:20.840529 | orchestrator | 2026-03-18 02:11:20.840635 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-18 02:11:20.840649 | orchestrator | Wednesday 18 March 2026 02:11:19 +0000 (0:00:03.603) 0:02:43.478 ******* 2026-03-18 02:11:20.840662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-18 02:11:20.840695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:11:20.840706 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 02:11:20.840716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-18 02:11:20.840771 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:20.840797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-18 02:11:20.840821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:11:20.840833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 02:11:20.840838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-18 02:11:20.840843 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:11:20.840848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-18 02:11:20.840853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:11:20.840865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 
'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 02:11:20.840878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-18 02:11:32.329137 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:32.329261 | orchestrator | 2026-03-18 02:11:32.329283 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-18 02:11:32.329304 | orchestrator | Wednesday 18 March 2026 02:11:20 +0000 (0:00:01.029) 0:02:44.507 ******* 2026-03-18 02:11:32.329324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-18 02:11:32.329344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786'}})  2026-03-18 02:11:32.329365 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:32.329386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-18 02:11:32.329405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-18 02:11:32.329423 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:11:32.329435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-18 02:11:32.329446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-18 02:11:32.329457 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:32.329468 | orchestrator | 2026-03-18 02:11:32.329479 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-18 02:11:32.329490 | orchestrator | Wednesday 18 March 2026 02:11:21 +0000 (0:00:00.957) 0:02:45.465 ******* 2026-03-18 02:11:32.329501 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:11:32.329512 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:11:32.329522 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:11:32.329533 | orchestrator | 2026-03-18 02:11:32.329544 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-18 02:11:32.329555 | orchestrator | Wednesday 18 March 2026 02:11:23 +0000 (0:00:01.343) 0:02:46.809 ******* 2026-03-18 02:11:32.329566 | orchestrator | 
changed: [testbed-node-0] 2026-03-18 02:11:32.329577 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:11:32.329588 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:11:32.329598 | orchestrator | 2026-03-18 02:11:32.329609 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-18 02:11:32.329620 | orchestrator | Wednesday 18 March 2026 02:11:25 +0000 (0:00:02.122) 0:02:48.932 ******* 2026-03-18 02:11:32.329631 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:11:32.329642 | orchestrator | 2026-03-18 02:11:32.329654 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-18 02:11:32.329667 | orchestrator | Wednesday 18 March 2026 02:11:26 +0000 (0:00:01.380) 0:02:50.312 ******* 2026-03-18 02:11:32.329679 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 02:11:32.329691 | orchestrator | 2026-03-18 02:11:32.329703 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-18 02:11:32.329715 | orchestrator | Wednesday 18 March 2026 02:11:29 +0000 (0:00:03.006) 0:02:53.319 ******* 2026-03-18 02:11:32.329833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 02:11:32.329854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-18 02:11:32.329869 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:32.329888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 02:11:32.329912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 
'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-18 02:11:32.329925 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:11:32.329949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 02:11:34.937138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-18 02:11:34.937271 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:34.937296 | orchestrator | 2026-03-18 02:11:34.937316 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-18 02:11:34.937336 | orchestrator | Wednesday 18 March 2026 02:11:32 +0000 (0:00:02.570) 0:02:55.889 ******* 2026-03-18 02:11:34.937382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 02:11:34.937427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-18 02:11:34.937447 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:11:34.937492 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 02:11:34.937541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-18 02:11:34.937555 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:34.937569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 02:11:34.937601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-18 02:11:45.205572 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:45.205723 | orchestrator | 2026-03-18 02:11:45.205856 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-18 02:11:45.205882 | orchestrator | Wednesday 18 March 2026 02:11:34 +0000 (0:00:02.618) 0:02:58.508 ******* 2026-03-18 02:11:45.205906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-18 02:11:45.205966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-18 02:11:45.205988 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:45.206123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-18 02:11:45.206159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}})  2026-03-18 02:11:45.206177 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:45.206195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-18 02:11:45.206214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-18 02:11:45.206232 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:11:45.206251 | orchestrator | 2026-03-18 02:11:45.206269 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-18 02:11:45.206287 | orchestrator | Wednesday 18 March 2026 02:11:37 +0000 (0:00:03.026) 0:03:01.535 ******* 2026-03-18 02:11:45.206307 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:11:45.206356 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:11:45.206377 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:11:45.206403 | orchestrator | 2026-03-18 02:11:45.206415 | 
orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-18 02:11:45.206426 | orchestrator | Wednesday 18 March 2026 02:11:40 +0000 (0:00:02.163) 0:03:03.699 ******* 2026-03-18 02:11:45.206437 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:45.206448 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:11:45.206458 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:45.206469 | orchestrator | 2026-03-18 02:11:45.206480 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-18 02:11:45.206491 | orchestrator | Wednesday 18 March 2026 02:11:41 +0000 (0:00:01.567) 0:03:05.266 ******* 2026-03-18 02:11:45.206501 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:45.206512 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:11:45.206523 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:45.206533 | orchestrator | 2026-03-18 02:11:45.206544 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-18 02:11:45.206555 | orchestrator | Wednesday 18 March 2026 02:11:42 +0000 (0:00:00.343) 0:03:05.610 ******* 2026-03-18 02:11:45.206565 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:11:45.206576 | orchestrator | 2026-03-18 02:11:45.206588 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-18 02:11:45.206599 | orchestrator | Wednesday 18 March 2026 02:11:43 +0000 (0:00:01.418) 0:03:07.029 ******* 2026-03-18 02:11:45.206619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-18 02:11:45.206635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-18 02:11:45.206647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'active_passive': True}}}}) 2026-03-18 02:11:45.206659 | orchestrator | 2026-03-18 02:11:45.206670 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-18 02:11:45.206682 | orchestrator | Wednesday 18 March 2026 02:11:44 +0000 (0:00:01.522) 0:03:08.551 ******* 2026-03-18 02:11:45.206709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-18 02:11:53.963872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-18 02:11:53.963975 | orchestrator | skipping: 
[testbed-node-0] 2026-03-18 02:11:53.963989 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:11:53.964000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-18 02:11:53.964010 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:53.964019 | orchestrator | 2026-03-18 02:11:53.964029 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-18 02:11:53.964039 | orchestrator | Wednesday 18 March 2026 02:11:45 +0000 (0:00:00.439) 0:03:08.991 ******* 2026-03-18 02:11:53.964049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-18 02:11:53.964060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-18 02:11:53.964068 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:53.964077 | orchestrator 
| skipping: [testbed-node-1] 2026-03-18 02:11:53.964086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-18 02:11:53.964137 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:53.964147 | orchestrator | 2026-03-18 02:11:53.964156 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-18 02:11:53.964185 | orchestrator | Wednesday 18 March 2026 02:11:46 +0000 (0:00:00.921) 0:03:09.912 ******* 2026-03-18 02:11:53.964194 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:53.964203 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:11:53.964211 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:53.964220 | orchestrator | 2026-03-18 02:11:53.964228 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-18 02:11:53.964237 | orchestrator | Wednesday 18 March 2026 02:11:46 +0000 (0:00:00.483) 0:03:10.396 ******* 2026-03-18 02:11:53.964245 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:53.964254 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:11:53.964262 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:53.964270 | orchestrator | 2026-03-18 02:11:53.964279 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-18 02:11:53.964287 | orchestrator | Wednesday 18 March 2026 02:11:48 +0000 (0:00:01.389) 0:03:11.786 ******* 2026-03-18 02:11:53.964296 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:53.964304 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:11:53.964313 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:11:53.964321 | orchestrator | 2026-03-18 02:11:53.964330 | 
orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-18 02:11:53.964338 | orchestrator | Wednesday 18 March 2026 02:11:48 +0000 (0:00:00.355) 0:03:12.141 ******* 2026-03-18 02:11:53.964347 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:11:53.964355 | orchestrator | 2026-03-18 02:11:53.964364 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-18 02:11:53.964372 | orchestrator | Wednesday 18 March 2026 02:11:50 +0000 (0:00:01.591) 0:03:13.732 ******* 2026-03-18 02:11:53.964396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:11:53.964414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:53.964426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:53.964446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': 
'30'}}})  2026-03-18 02:11:53.964457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-18 02:11:53.964476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.203553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}}})  2026-03-18 02:11:54.203638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:11:54.203648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.203669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:11:54.203688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.203694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-18 02:11:54.203709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:11:54.203722 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.203731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-18 02:11:54.203745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 
'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:11:54.203776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:11:54.203782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.203792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.341249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:11:54.341347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.341359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-18 02:11:54.341369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.341378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.341404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.341419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:11:54.341428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:11:54.341435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.341443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.341450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-18 02:11:54.341467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.556015 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:11:54.556134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.556161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:11:54.556185 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-18 02:11:54.556206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:11:54.556227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:11:54.556293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.556329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:11:54.556341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.556354 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:54.556366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-18 02:11:54.556379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-18 02:11:54.556415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:11:55.688322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:11:55.689351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:55.689393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-18 02:11:55.689410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:11:55.689421 | orchestrator | 2026-03-18 02:11:55.689434 | orchestrator | 
TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-18 02:11:55.689445 | orchestrator | Wednesday 18 March 2026 02:11:54 +0000 (0:00:04.394) 0:03:18.126 ******* 2026-03-18 02:11:55.689496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:11:55.689532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:11:55.689544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:55.689555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:55.689566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:55.689589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:55.689607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:55.806330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:55.806432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-18 02:11:55.806448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-18 02:11:55.806484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:55.806511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 
'timeout': '30'}}})  2026-03-18 02:11:55.806524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:11:55.806553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:11:55.806565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:11:55.806576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:11:55.806587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:55.806604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:55.806619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:11:55.806638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:11:55.881499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:55.881593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 
'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:55.881609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-18 02:11:55.881646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-18 02:11:55.881675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:11:55.881688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:11:55.881716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:55.881729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:55.881743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-18 02:11:55.881789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
-u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-18 02:11:55.881804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:11:55.881824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2026-03-18 02:11:56.162109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:56.162203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:11:56.162238 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:11:56.162251 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:11:56.162262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:56.162280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:56.162292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-18 02:11:56.162323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:56.162336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:11:56.162356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:11:56.162368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:56.162384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:11:56.162395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-18 02:11:56.162405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-18 02:11:56.162423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-18 02:12:07.422476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-18 02:12:07.422586 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-18 02:12:07.422611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:12:07.422617 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:12:07.422622 | orchestrator | 2026-03-18 02:12:07.422627 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-18 02:12:07.422632 | orchestrator | Wednesday 18 March 2026 02:11:56 +0000 (0:00:01.605) 0:03:19.732 ******* 2026-03-18 
02:12:07.422637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-18 02:12:07.422643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-18 02:12:07.422649 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:12:07.422653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-18 02:12:07.422657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-18 02:12:07.422661 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:12:07.422665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-18 02:12:07.422669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-18 02:12:07.422672 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:12:07.422681 | orchestrator | 2026-03-18 02:12:07.422685 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-18 02:12:07.422689 | orchestrator | Wednesday 18 March 2026 02:11:58 +0000 (0:00:02.242) 0:03:21.974 ******* 2026-03-18 02:12:07.422693 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:12:07.422697 | 
orchestrator | changed: [testbed-node-1] 2026-03-18 02:12:07.422712 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:12:07.422719 | orchestrator | 2026-03-18 02:12:07.422725 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-18 02:12:07.422731 | orchestrator | Wednesday 18 March 2026 02:11:59 +0000 (0:00:01.319) 0:03:23.294 ******* 2026-03-18 02:12:07.422737 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:12:07.422742 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:12:07.422748 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:12:07.422755 | orchestrator | 2026-03-18 02:12:07.422833 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-18 02:12:07.422837 | orchestrator | Wednesday 18 March 2026 02:12:01 +0000 (0:00:02.258) 0:03:25.552 ******* 2026-03-18 02:12:07.422841 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:12:07.422845 | orchestrator | 2026-03-18 02:12:07.422848 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-18 02:12:07.422852 | orchestrator | Wednesday 18 March 2026 02:12:03 +0000 (0:00:01.316) 0:03:26.869 ******* 2026-03-18 02:12:07.422857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:12:07.422868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:12:07.422872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:12:07.422881 | orchestrator | 2026-03-18 02:12:07.422885 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-18 02:12:07.422889 | orchestrator | Wednesday 18 March 2026 02:12:06 +0000 (0:00:03.598) 0:03:30.467 ******* 2026-03-18 02:12:07.422899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-18 02:12:18.340422 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:12:18.341281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-18 02:12:18.341331 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:12:18.341353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-18 02:12:18.341360 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:12:18.341365 | orchestrator | 2026-03-18 02:12:18.341371 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-18 02:12:18.341377 | orchestrator | Wednesday 18 March 2026 02:12:07 +0000 (0:00:00.524) 0:03:30.992 ******* 2026-03-18 02:12:18.341382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}})  2026-03-18 02:12:18.341390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-18 02:12:18.341418 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:12:18.341425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-18 02:12:18.341431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-18 02:12:18.341438 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:12:18.341445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-18 02:12:18.341452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-18 02:12:18.341458 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:12:18.341465 | orchestrator | 2026-03-18 02:12:18.341472 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-18 02:12:18.341479 | orchestrator | Wednesday 18 March 2026 02:12:08 +0000 (0:00:00.838) 0:03:31.831 ******* 2026-03-18 02:12:18.341486 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:12:18.341490 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:12:18.341494 | 
orchestrator | changed: [testbed-node-2] 2026-03-18 02:12:18.341498 | orchestrator | 2026-03-18 02:12:18.341502 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-18 02:12:18.341507 | orchestrator | Wednesday 18 March 2026 02:12:10 +0000 (0:00:01.971) 0:03:33.802 ******* 2026-03-18 02:12:18.341511 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:12:18.341515 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:12:18.341537 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:12:18.341541 | orchestrator | 2026-03-18 02:12:18.341545 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-18 02:12:18.341549 | orchestrator | Wednesday 18 March 2026 02:12:12 +0000 (0:00:01.900) 0:03:35.703 ******* 2026-03-18 02:12:18.341554 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:12:18.341558 | orchestrator | 2026-03-18 02:12:18.341562 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-18 02:12:18.341566 | orchestrator | Wednesday 18 March 2026 02:12:13 +0000 (0:00:01.761) 0:03:37.465 ******* 2026-03-18 02:12:18.341573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-18 02:12:18.341589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:12:18.341597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 02:12:18.341610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-18 02:12:19.653044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:12:19.653140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 02:12:19.653174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-18 02:12:19.653207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:12:19.653216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 02:12:19.653222 | orchestrator | 2026-03-18 02:12:19.653229 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-18 02:12:19.653236 | orchestrator | Wednesday 18 March 2026 02:12:18 +0000 (0:00:04.445) 0:03:41.911 ******* 2026-03-18 02:12:19.653258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-18 02:12:19.653265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:12:19.653290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 02:12:19.653305 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:12:19.653312 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-18 02:12:19.653323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:12:31.175145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 
'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 02:12:31.175225 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:12:31.175247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-18 02:12:31.175269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 02:12:31.175273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 02:12:31.175277 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:12:31.175281 | orchestrator | 2026-03-18 02:12:31.175286 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-18 02:12:31.175291 | orchestrator | Wednesday 18 March 2026 02:12:19 +0000 (0:00:01.309) 0:03:43.221 ******* 2026-03-18 02:12:31.175295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-18 02:12:31.175303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-18 02:12:31.175309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-18 02:12:31.175322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-18 02:12:31.175328 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:12:31.175332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-18 02:12:31.175336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-18 02:12:31.175344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-18 02:12:31.175348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-18 02:12:31.175352 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:12:31.175356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-18 02:12:31.175360 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-18 02:12:31.175363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-18 02:12:31.175370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-18 02:12:31.175374 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:12:31.175378 | orchestrator | 2026-03-18 02:12:31.175382 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-18 02:12:31.175385 | orchestrator | Wednesday 18 March 2026 02:12:20 +0000 (0:00:00.956) 0:03:44.177 ******* 2026-03-18 02:12:31.175389 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:12:31.175393 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:12:31.175397 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:12:31.175400 | orchestrator | 2026-03-18 02:12:31.175404 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-18 02:12:31.175408 | orchestrator | Wednesday 18 March 2026 02:12:21 +0000 (0:00:01.397) 0:03:45.575 ******* 2026-03-18 02:12:31.175412 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:12:31.175415 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:12:31.175419 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:12:31.175423 | orchestrator | 2026-03-18 02:12:31.175426 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-18 02:12:31.175430 
| orchestrator | Wednesday 18 March 2026 02:12:24 +0000 (0:00:02.201) 0:03:47.776 ******* 2026-03-18 02:12:31.175434 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:12:31.175437 | orchestrator | 2026-03-18 02:12:31.175441 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-18 02:12:31.175445 | orchestrator | Wednesday 18 March 2026 02:12:26 +0000 (0:00:01.854) 0:03:49.631 ******* 2026-03-18 02:12:31.175448 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-18 02:12:31.175453 | orchestrator | 2026-03-18 02:12:31.175457 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-18 02:12:31.175460 | orchestrator | Wednesday 18 March 2026 02:12:27 +0000 (0:00:00.965) 0:03:50.597 ******* 2026-03-18 02:12:31.175465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-18 02:12:31.175478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-18 02:12:43.529347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-18 02:12:43.529461 | orchestrator | 2026-03-18 02:12:43.529475 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-18 02:12:43.529483 | orchestrator | Wednesday 18 March 2026 02:12:31 +0000 (0:00:04.149) 0:03:54.746 ******* 2026-03-18 02:12:43.529491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-18 02:12:43.529498 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:12:43.529521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-18 02:12:43.529528 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:12:43.529535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-18 02:12:43.529541 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:12:43.529547 | orchestrator | 2026-03-18 02:12:43.529553 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-18 02:12:43.529560 | orchestrator | Wednesday 18 March 2026 02:12:32 +0000 (0:00:01.515) 0:03:56.262 ******* 2026-03-18 02:12:43.529568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-18 02:12:43.529578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-18 02:12:43.529585 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:12:43.529612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-18 02:12:43.529619 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-18 02:12:43.529625 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:12:43.529631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-18 02:12:43.529638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-18 02:12:43.529660 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:12:43.529667 | orchestrator |
2026-03-18 02:12:43.529673 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-18 02:12:43.529679 | orchestrator | Wednesday 18 March 2026 02:12:34 +0000 (0:00:01.678) 0:03:57.941 *******
2026-03-18 02:12:43.529685 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:12:43.529691 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:12:43.529696 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:12:43.529701 | orchestrator |
2026-03-18 02:12:43.529707 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-18 02:12:43.529713 | orchestrator | Wednesday 18 March 2026 02:12:36 +0000 (0:00:02.544) 0:04:00.486 *******
2026-03-18 02:12:43.529719 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:12:43.529724 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:12:43.529730 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:12:43.529736 | orchestrator |
2026-03-18 02:12:43.529741 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-03-18 02:12:43.529747 | orchestrator | Wednesday 18 March 2026 02:12:39 +0000 (0:00:03.007) 0:04:03.493 *******
2026-03-18 02:12:43.529754 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-03-18 02:12:43.529761 | orchestrator |
2026-03-18 02:12:43.529767 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-03-18 02:12:43.529773 | orchestrator | Wednesday 18 March 2026 02:12:41 +0000 (0:00:01.184) 0:04:04.678 *******
2026-03-18 02:12:43.529812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-18 02:12:43.529820 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:12:43.529827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-18 02:12:43.529833 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:12:43.529847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-18 02:12:43.529854 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:12:43.529860 | orchestrator |
2026-03-18 02:12:43.529866 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-03-18 02:12:43.529873 | orchestrator | Wednesday 18 March 2026 02:12:42 +0000 (0:00:01.075) 0:04:05.754 *******
2026-03-18 02:12:43.529879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-18 02:12:43.529886 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:12:43.529892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-18 02:12:43.529906 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:13:07.650578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-18 02:13:07.650662 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:13:07.650671 | orchestrator |
2026-03-18 02:13:07.650677 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-03-18 02:13:07.650683 | orchestrator | Wednesday 18 March 2026 02:12:43 +0000 (0:00:01.338) 0:04:07.092 *******
2026-03-18 02:13:07.650689 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:13:07.650694 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:13:07.650699 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:13:07.650704 | orchestrator |
2026-03-18 02:13:07.650709 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-18 02:13:07.650714 | orchestrator | Wednesday 18 March 2026 02:12:45 +0000 (0:00:01.655) 0:04:08.747 *******
2026-03-18 02:13:07.650719 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:13:07.650725 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:13:07.650730 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:13:07.650735 | orchestrator |
2026-03-18 02:13:07.650740 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-18 02:13:07.650745 | orchestrator | Wednesday 18 March 2026 02:12:47 +0000 (0:00:02.733) 0:04:11.481 *******
2026-03-18 02:13:07.650750 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:13:07.650755 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:13:07.650777 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:13:07.650782 | orchestrator |
2026-03-18 02:13:07.650787 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-03-18 02:13:07.650791 | orchestrator | Wednesday 18 March 2026 02:12:50 +0000 (0:00:02.698) 0:04:14.179 *******
2026-03-18 02:13:07.650868 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-03-18 02:13:07.650881 | orchestrator |
2026-03-18 02:13:07.650889 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-03-18 02:13:07.650897 | orchestrator | Wednesday 18 March 2026 02:12:51 +0000 (0:00:01.264) 0:04:15.444 *******
2026-03-18 02:13:07.650906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-18 02:13:07.650913 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:13:07.650921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-18 02:13:07.650929 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:13:07.650937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-18 02:13:07.650945 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:13:07.650953 | orchestrator |
2026-03-18 02:13:07.650960 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-03-18 02:13:07.650970 | orchestrator | Wednesday 18 March 2026 02:12:53 +0000 (0:00:01.398) 0:04:16.843 *******
2026-03-18 02:13:07.650995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-18 02:13:07.651004 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:13:07.651013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-18 02:13:07.651030 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:13:07.651038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-03-18 02:13:07.651046 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:13:07.651054 | orchestrator |
2026-03-18 02:13:07.651061 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-03-18 02:13:07.651071 | orchestrator | Wednesday 18 March 2026 02:12:54 +0000 (0:00:01.428) 0:04:18.271 *******
2026-03-18 02:13:07.651076 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:13:07.651080 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:13:07.651085 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:13:07.651090 | orchestrator |
2026-03-18 02:13:07.651095 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-18 02:13:07.651100 | orchestrator | Wednesday 18 March 2026 02:12:56 +0000 (0:00:02.069) 0:04:20.341 *******
2026-03-18 02:13:07.651104 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:13:07.651109 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:13:07.651114 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:13:07.651119 | orchestrator |
2026-03-18 02:13:07.651124 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-18 02:13:07.651129 | orchestrator | Wednesday 18 March 2026 02:12:59 +0000 (0:00:02.482) 0:04:22.824 *******
2026-03-18 02:13:07.651134 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:13:07.651139 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:13:07.651145 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:13:07.651150 | orchestrator |
2026-03-18 02:13:07.651156 | orchestrator | TASK [include_role : octavia] **************************************************
2026-03-18 02:13:07.651161 | orchestrator | Wednesday 18 March 2026 02:13:02 +0000 (0:00:03.350) 0:04:26.174 *******
2026-03-18 02:13:07.651167 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:13:07.651172 | orchestrator |
2026-03-18 02:13:07.651178 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-03-18 02:13:07.651185 | orchestrator | Wednesday 18 March 2026 02:13:04 +0000 (0:00:01.749) 0:04:27.924 *******
2026-03-18 02:13:07.651195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-18 02:13:07.651210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-18 02:13:07.651234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-18 02:13:08.393999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-18 02:13:08.394136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-18 02:13:08.394149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-18 02:13:08.394157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-18 02:13:08.394165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-18 02:13:08.394191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-18 02:13:08.394212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-18 02:13:08.394219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-18 02:13:08.394226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-18 02:13:08.394237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-18 02:13:08.394280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-18 02:13:08.394301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-18 02:13:08.394312 | orchestrator |
2026-03-18 02:13:08.394325 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-03-18 02:13:08.394335 | orchestrator | Wednesday 18 March 2026 02:13:07 +0000 (0:00:03.442) 0:04:31.367 *******
2026-03-18 02:13:08.394357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-18 02:13:08.553437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-18 02:13:08.553525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-18 02:13:08.553536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-18 02:13:08.553544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-18 02:13:08.553568 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:13:08.553577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-18 02:13:08.553585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-18 02:13:08.553609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-18 02:13:08.553618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-18 02:13:08.553624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-18 02:13:08.553631 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:13:08.553638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-18 02:13:08.553649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-18 02:13:08.553656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-18 02:13:08.553671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-18 02:13:20.879934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-18 02:13:20.880075 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:13:20.880094 | orchestrator |
2026-03-18 02:13:20.880107 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-03-18 02:13:20.880119 | orchestrator | Wednesday 18 March 2026 02:13:08 +0000 (0:00:00.756) 0:04:32.124 *******
2026-03-18 02:13:20.880131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-18 02:13:20.880145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-18 02:13:20.880182 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:13:20.880195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-18 02:13:20.880206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-18 02:13:20.880217 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:13:20.880228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-18 02:13:20.880239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-18 02:13:20.880250 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:13:20.880260 | orchestrator |
2026-03-18 02:13:20.880272 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-03-18 02:13:20.880282 | orchestrator | Wednesday 18 March 2026 02:13:09 +0000 (0:00:01.054) 0:04:33.179 *******
2026-03-18 02:13:20.880293 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:13:20.880304 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:13:20.880315 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:13:20.880325 | orchestrator |
2026-03-18 02:13:20.880336 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-03-18 02:13:20.880346 | orchestrator | Wednesday 18 March 2026 02:13:11 +0000 (0:00:01.835) 0:04:35.014 *******
2026-03-18 02:13:20.880357 | orchestrator
| changed: [testbed-node-0] 2026-03-18 02:13:20.880367 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:13:20.880378 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:13:20.880389 | orchestrator | 2026-03-18 02:13:20.880400 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-18 02:13:20.880411 | orchestrator | Wednesday 18 March 2026 02:13:13 +0000 (0:00:02.262) 0:04:37.277 ******* 2026-03-18 02:13:20.880422 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:13:20.880435 | orchestrator | 2026-03-18 02:13:20.880447 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-18 02:13:20.880460 | orchestrator | Wednesday 18 March 2026 02:13:15 +0000 (0:00:01.458) 0:04:38.736 ******* 2026-03-18 02:13:20.880491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-18 02:13:20.880528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-18 02:13:20.880552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-18 02:13:20.880566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-18 02:13:20.880582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-18 02:13:20.880611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-18 02:13:22.954541 | orchestrator | 2026-03-18 02:13:22.954641 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-18 02:13:22.954657 | orchestrator | Wednesday 18 March 2026 02:13:20 +0000 (0:00:05.702) 0:04:44.439 ******* 2026-03-18 02:13:22.954672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-18 02:13:22.954689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-18 02:13:22.954702 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:13:22.954715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-18 02:13:22.954744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-18 02:13:22.954799 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:13:22.954929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-18 02:13:22.954945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-18 02:13:22.954957 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:13:22.954968 | orchestrator | 2026-03-18 02:13:22.954979 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-18 02:13:22.954991 | orchestrator | Wednesday 18 March 2026 02:13:21 +0000 (0:00:01.137) 0:04:45.577 ******* 2026-03-18 02:13:22.955003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-18 02:13:22.955016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-18 02:13:22.955030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-18 02:13:22.955042 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:13:22.955056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-18 02:13:22.955088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-18 02:13:22.955101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-18 02:13:22.955114 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:13:22.955127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-18 02:13:22.955140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-18 02:13:22.955167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-18 02:13:29.352259 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:13:29.352339 | orchestrator | 2026-03-18 02:13:29.352348 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-18 02:13:29.352355 | orchestrator | Wednesday 18 March 2026 02:13:22 +0000 (0:00:00.931) 0:04:46.508 ******* 2026-03-18 02:13:29.352360 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:13:29.352365 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:13:29.352370 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:13:29.352375 | orchestrator | 2026-03-18 02:13:29.352380 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-18 02:13:29.352385 | orchestrator | Wednesday 18 March 2026 02:13:23 +0000 (0:00:00.449) 0:04:46.957 ******* 2026-03-18 02:13:29.352390 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:13:29.352395 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:13:29.352400 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:13:29.352404 | orchestrator | 2026-03-18 02:13:29.352409 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-18 02:13:29.352414 | orchestrator | Wednesday 18 March 2026 02:13:24 +0000 (0:00:01.577) 0:04:48.535 ******* 2026-03-18 02:13:29.352419 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:13:29.352424 | orchestrator | 2026-03-18 02:13:29.352429 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-18 02:13:29.352434 | orchestrator | Wednesday 18 March 2026 02:13:26 +0000 (0:00:01.881) 0:04:50.417 ******* 2026-03-18 
02:13:29.352440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-18 02:13:29.352449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 02:13:29.352474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:29.352493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:29.352500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 02:13:29.352517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-18 02:13:29.352523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 02:13:29.352528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:29.352533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-18 02:13:29.352546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:29.352551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 02:13:29.352557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 02:13:29.352567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:31.022474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:31.022568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 02:13:31.022608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-18 02:13:31.022636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-18 02:13:31.022647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:31.022657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:31.022683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 02:13:31.022693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-18 02:13:31.022716 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-18 02:13:31.022727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-18 02:13:31.022743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-18 02:13:31.806193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:31.806294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:31.806331 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:31.806343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 02:13:31.806382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:31.806393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 02:13:31.806404 | orchestrator | 2026-03-18 02:13:31.806416 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-18 02:13:31.806428 | orchestrator | Wednesday 18 March 2026 02:13:31 +0000 (0:00:04.322) 0:04:54.739 ******* 2026-03-18 02:13:31.806439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-18 02:13:31.806470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 02:13:31.806489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 
'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:31.806499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:31.806510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 02:13:31.806528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-18 02:13:31.806541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-18 02:13:31.806559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 02:13:31.937788 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-18 02:13:31.937977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:31.937994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 
02:13:31.938089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:31.938104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:31.938115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 02:13:31.938128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 02:13:31.938162 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:13:31.938199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-18 02:13:31.938214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-18 02:13:31.938232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-18 02:13:31.938245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:31.938257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 02:13:31.938283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:33.836962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:33.837072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 02:13:33.837087 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:13:33.837098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:33.837123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 02:13:33.837134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-18 02:13:33.837145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-18 02:13:33.837191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:33.837202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 02:13:33.837210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 02:13:33.837217 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:13:33.837227 | orchestrator | 2026-03-18 02:13:33.837240 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-18 02:13:33.837253 | orchestrator | Wednesday 18 March 2026 02:13:32 +0000 (0:00:00.929) 0:04:55.669 ******* 2026-03-18 02:13:33.837266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-18 02:13:33.837287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-18 02:13:33.837302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-18 02:13:33.837314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-18 02:13:33.837327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-18 02:13:33.837340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-18 02:13:33.837362 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:13:33.837373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-18 02:13:33.837385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-18 02:13:33.837396 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:13:33.837408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-18 02:13:33.837429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-18 02:13:41.441106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-18 02:13:41.441212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-18 02:13:41.441230 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:13:41.441243 | orchestrator | 2026-03-18 02:13:41.441255 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-18 02:13:41.441267 | orchestrator | Wednesday 18 March 2026 02:13:33 +0000 (0:00:01.730) 0:04:57.399 ******* 2026-03-18 02:13:41.441278 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:13:41.441289 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:13:41.441299 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:13:41.441310 | orchestrator | 2026-03-18 02:13:41.441321 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-18 02:13:41.441332 | orchestrator | Wednesday 18 March 2026 02:13:34 +0000 (0:00:00.484) 0:04:57.883 ******* 2026-03-18 02:13:41.441342 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:13:41.441353 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:13:41.441364 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:13:41.441374 | orchestrator | 2026-03-18 02:13:41.441385 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 
2026-03-18 02:13:41.441395 | orchestrator | Wednesday 18 March 2026 02:13:35 +0000 (0:00:01.493) 0:04:59.377 ******* 2026-03-18 02:13:41.441406 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:13:41.441416 | orchestrator | 2026-03-18 02:13:41.441427 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-18 02:13:41.441438 | orchestrator | Wednesday 18 March 2026 02:13:37 +0000 (0:00:01.953) 0:05:01.330 ******* 2026-03-18 02:13:41.441451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 02:13:41.441496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 02:13:41.441572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 02:13:41.441588 | orchestrator | 2026-03-18 02:13:41.441599 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-18 02:13:41.441611 | orchestrator | Wednesday 18 March 2026 02:13:39 +0000 (0:00:02.227) 0:05:03.557 ******* 2026-03-18 02:13:41.441623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-18 02:13:41.441642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-18 02:13:41.441664 | orchestrator | skipping: 
[testbed-node-0] 2026-03-18 02:13:41.441677 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:13:41.441690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-18 02:13:41.441703 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:13:41.441715 | orchestrator | 2026-03-18 02:13:41.441728 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-18 02:13:41.441740 | orchestrator | Wednesday 18 March 2026 02:13:40 +0000 (0:00:00.452) 0:05:04.010 ******* 2026-03-18 02:13:41.441754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-18 02:13:41.441776 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:13:52.483006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-18 02:13:52.483101 | orchestrator | skipping: 
[testbed-node-1] 2026-03-18 02:13:52.483114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-18 02:13:52.483123 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:13:52.483131 | orchestrator | 2026-03-18 02:13:52.483141 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-18 02:13:52.483150 | orchestrator | Wednesday 18 March 2026 02:13:41 +0000 (0:00:01.003) 0:05:05.014 ******* 2026-03-18 02:13:52.483159 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:13:52.483167 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:13:52.483175 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:13:52.483183 | orchestrator | 2026-03-18 02:13:52.483191 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-18 02:13:52.483199 | orchestrator | Wednesday 18 March 2026 02:13:41 +0000 (0:00:00.477) 0:05:05.491 ******* 2026-03-18 02:13:52.483207 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:13:52.483215 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:13:52.483223 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:13:52.483255 | orchestrator | 2026-03-18 02:13:52.483264 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-18 02:13:52.483271 | orchestrator | Wednesday 18 March 2026 02:13:43 +0000 (0:00:01.475) 0:05:06.967 ******* 2026-03-18 02:13:52.483279 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:13:52.483287 | orchestrator | 2026-03-18 02:13:52.483296 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-18 02:13:52.483304 | orchestrator | Wednesday 18 March 2026 02:13:44 +0000 (0:00:01.616) 0:05:08.584 ******* 2026-03-18 02:13:52.483330 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 02:13:52.483343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 02:13:52.483366 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 02:13:52.483375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 02:13:52.483394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 
'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 02:13:52.483403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 02:13:52.483411 | orchestrator | 2026-03-18 02:13:52.483419 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external 
frontend] *** 2026-03-18 02:13:52.483427 | orchestrator | Wednesday 18 March 2026 02:13:51 +0000 (0:00:06.730) 0:05:15.314 ******* 2026-03-18 02:13:52.483436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-18 02:13:52.483449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-18 02:13:58.727327 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:13:58.727459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-18 02:13:58.727480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-18 02:13:58.727491 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:13:58.727501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-18 02:13:58.727510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-18 02:13:58.727541 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:13:58.727562 | orchestrator | 2026-03-18 02:13:58.727574 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-18 02:13:58.727585 | orchestrator | Wednesday 18 March 2026 02:13:52 +0000 (0:00:00.735) 0:05:16.049 ******* 2026-03-18 02:13:58.727610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-18 02:13:58.727623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-18 02:13:58.727633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-18 02:13:58.727643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-18 02:13:58.727657 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:13:58.727666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-18 02:13:58.727676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-18 02:13:58.727685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-18 02:13:58.727694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-18 02:13:58.727703 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:13:58.727712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-18 02:13:58.727721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-18 02:13:58.727731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-18 02:13:58.727740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-18 02:13:58.727749 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:13:58.727758 | orchestrator | 2026-03-18 02:13:58.727767 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-18 
02:13:58.727783 | orchestrator | Wednesday 18 March 2026 02:13:53 +0000 (0:00:01.077) 0:05:17.127 ******* 2026-03-18 02:13:58.727792 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:13:58.727802 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:13:58.727818 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:13:58.727827 | orchestrator | 2026-03-18 02:13:58.727876 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-18 02:13:58.727885 | orchestrator | Wednesday 18 March 2026 02:13:54 +0000 (0:00:01.344) 0:05:18.472 ******* 2026-03-18 02:13:58.727894 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:13:58.727902 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:13:58.727911 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:13:58.727919 | orchestrator | 2026-03-18 02:13:58.727928 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-18 02:13:58.727936 | orchestrator | Wednesday 18 March 2026 02:13:57 +0000 (0:00:02.306) 0:05:20.778 ******* 2026-03-18 02:13:58.727945 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:13:58.727954 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:13:58.727962 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:13:58.727971 | orchestrator | 2026-03-18 02:13:58.727979 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-18 02:13:58.727988 | orchestrator | Wednesday 18 March 2026 02:13:57 +0000 (0:00:00.691) 0:05:21.470 ******* 2026-03-18 02:13:58.727996 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:13:58.728004 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:13:58.728013 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:13:58.728021 | orchestrator | 2026-03-18 02:13:58.728030 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-18 
02:13:58.728038 | orchestrator | Wednesday 18 March 2026 02:13:58 +0000 (0:00:00.393) 0:05:21.863 ******* 2026-03-18 02:13:58.728048 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:13:58.728062 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:14:43.795806 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:14:43.796025 | orchestrator | 2026-03-18 02:14:43.796052 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-18 02:14:43.796071 | orchestrator | Wednesday 18 March 2026 02:13:58 +0000 (0:00:00.439) 0:05:22.303 ******* 2026-03-18 02:14:43.796087 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:14:43.796103 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:14:43.796118 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:14:43.796132 | orchestrator | 2026-03-18 02:14:43.796146 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-18 02:14:43.796160 | orchestrator | Wednesday 18 March 2026 02:13:59 +0000 (0:00:00.385) 0:05:22.688 ******* 2026-03-18 02:14:43.796173 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:14:43.796187 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:14:43.796200 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:14:43.796213 | orchestrator | 2026-03-18 02:14:43.796226 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-18 02:14:43.796239 | orchestrator | Wednesday 18 March 2026 02:13:59 +0000 (0:00:00.687) 0:05:23.375 ******* 2026-03-18 02:14:43.796253 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:14:43.796267 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:14:43.796298 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:14:43.796312 | orchestrator | 2026-03-18 02:14:43.796326 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-18 
02:14:43.796341 | orchestrator | Wednesday 18 March 2026 02:14:00 +0000 (0:00:00.583) 0:05:23.959 ******* 2026-03-18 02:14:43.796358 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:14:43.796375 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:14:43.796391 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:14:43.796406 | orchestrator | 2026-03-18 02:14:43.796423 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-18 02:14:43.796438 | orchestrator | Wednesday 18 March 2026 02:14:01 +0000 (0:00:00.705) 0:05:24.665 ******* 2026-03-18 02:14:43.796483 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:14:43.796500 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:14:43.796516 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:14:43.796534 | orchestrator | 2026-03-18 02:14:43.796551 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-18 02:14:43.796567 | orchestrator | Wednesday 18 March 2026 02:14:01 +0000 (0:00:00.714) 0:05:25.380 ******* 2026-03-18 02:14:43.796584 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:14:43.796602 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:14:43.796618 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:14:43.796636 | orchestrator | 2026-03-18 02:14:43.796652 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-18 02:14:43.796667 | orchestrator | Wednesday 18 March 2026 02:14:02 +0000 (0:00:00.901) 0:05:26.281 ******* 2026-03-18 02:14:43.796683 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:14:43.796698 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:14:43.796714 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:14:43.796731 | orchestrator | 2026-03-18 02:14:43.796747 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-18 02:14:43.796763 | orchestrator | Wednesday 18 March 2026 02:14:03 +0000 
(0:00:00.876) 0:05:27.158 *******
2026-03-18 02:14:43.796779 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:14:43.796793 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:14:43.796808 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:14:43.796822 | orchestrator |
2026-03-18 02:14:43.796837 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-03-18 02:14:43.796852 | orchestrator | Wednesday 18 March 2026 02:14:04 +0000 (0:00:00.889) 0:05:28.047 *******
2026-03-18 02:14:43.796921 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:14:43.796937 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:14:43.796951 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:14:43.796964 | orchestrator |
2026-03-18 02:14:43.796977 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-18 02:14:43.796991 | orchestrator | Wednesday 18 March 2026 02:14:09 +0000 (0:00:04.679) 0:05:32.727 *******
2026-03-18 02:14:43.797005 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:14:43.797018 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:14:43.797031 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:14:43.797044 | orchestrator |
2026-03-18 02:14:43.797058 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-18 02:14:43.797071 | orchestrator | Wednesday 18 March 2026 02:14:12 +0000 (0:00:03.198) 0:05:35.926 *******
2026-03-18 02:14:43.797086 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:14:43.797100 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:14:43.797114 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:14:43.797129 | orchestrator |
2026-03-18 02:14:43.797143 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-18 02:14:43.797157 | orchestrator | Wednesday 18 March 2026 02:14:28 +0000 (0:00:16.461) 0:05:52.388 *******
2026-03-18 02:14:43.797171 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:14:43.797186 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:14:43.797201 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:14:43.797214 | orchestrator |
2026-03-18 02:14:43.797228 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-18 02:14:43.797245 | orchestrator | Wednesday 18 March 2026 02:14:29 +0000 (0:00:00.781) 0:05:53.170 *******
2026-03-18 02:14:43.797260 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:14:43.797274 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:14:43.797290 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:14:43.797306 | orchestrator |
2026-03-18 02:14:43.797321 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-18 02:14:43.797336 | orchestrator | Wednesday 18 March 2026 02:14:34 +0000 (0:00:04.601) 0:05:57.771 *******
2026-03-18 02:14:43.797350 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:14:43.797365 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:14:43.797404 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:14:43.797422 | orchestrator |
2026-03-18 02:14:43.797438 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-18 02:14:43.797453 | orchestrator | Wednesday 18 March 2026 02:14:34 +0000 (0:00:00.755) 0:05:58.527 *******
2026-03-18 02:14:43.797468 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:14:43.797483 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:14:43.797500 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:14:43.797515 | orchestrator |
2026-03-18 02:14:43.797560 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-18 02:14:43.797578 | orchestrator | Wednesday 18 March 2026 02:14:35 +0000 (0:00:00.367) 0:05:58.894 *******
2026-03-18 02:14:43.797593 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:14:43.797609 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:14:43.797625 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:14:43.797640 | orchestrator |
2026-03-18 02:14:43.797655 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-18 02:14:43.797670 | orchestrator | Wednesday 18 March 2026 02:14:35 +0000 (0:00:00.380) 0:05:59.274 *******
2026-03-18 02:14:43.797684 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:14:43.797700 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:14:43.797715 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:14:43.797730 | orchestrator |
2026-03-18 02:14:43.797745 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-18 02:14:43.797759 | orchestrator | Wednesday 18 March 2026 02:14:36 +0000 (0:00:00.405) 0:05:59.680 *******
2026-03-18 02:14:43.797773 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:14:43.797787 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:14:43.797802 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:14:43.797816 | orchestrator |
2026-03-18 02:14:43.797844 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-18 02:14:43.797860 | orchestrator | Wednesday 18 March 2026 02:14:36 +0000 (0:00:00.739) 0:06:00.419 *******
2026-03-18 02:14:43.797902 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:14:43.797916 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:14:43.798128 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:14:43.798147 | orchestrator |
2026-03-18 02:14:43.798172 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-18 02:14:43.798189 | orchestrator | Wednesday 18 March 2026 02:14:37 +0000 (0:00:00.379) 0:06:00.799 *******
2026-03-18 02:14:43.798205 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:14:43.798220 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:14:43.798236 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:14:43.798251 | orchestrator |
2026-03-18 02:14:43.798267 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-18 02:14:43.798283 | orchestrator | Wednesday 18 March 2026 02:14:42 +0000 (0:00:04.820) 0:06:05.620 *******
2026-03-18 02:14:43.798298 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:14:43.798313 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:14:43.798327 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:14:43.798340 | orchestrator |
2026-03-18 02:14:43.798355 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 02:14:43.798371 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-18 02:14:43.798389 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-18 02:14:43.798405 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-18 02:14:43.798421 | orchestrator |
2026-03-18 02:14:43.798437 | orchestrator |
2026-03-18 02:14:43.798453 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 02:14:43.798487 | orchestrator | Wednesday 18 March 2026 02:14:42 +0000 (0:00:00.833) 0:06:06.454 *******
2026-03-18 02:14:43.798503 | orchestrator | ===============================================================================
2026-03-18 02:14:43.798519 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 16.46s
2026-03-18 02:14:43.798535 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.73s
2026-03-18 02:14:43.798550 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.70s
2026-03-18 02:14:43.798566 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.82s
2026-03-18 02:14:43.798581 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.68s
2026-03-18 02:14:43.798598 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.60s
2026-03-18 02:14:43.798612 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.45s
2026-03-18 02:14:43.798625 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.43s
2026-03-18 02:14:43.798639 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.39s
2026-03-18 02:14:43.798653 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.32s
2026-03-18 02:14:43.798667 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.15s
2026-03-18 02:14:43.798681 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.00s
2026-03-18 02:14:43.798696 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.89s
2026-03-18 02:14:43.798710 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.77s
2026-03-18 02:14:43.798726 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.60s
2026-03-18 02:14:43.798742 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.60s
2026-03-18 02:14:43.798758 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.56s
2026-03-18 02:14:43.798774 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.51s
2026-03-18 02:14:43.798790 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.44s
2026-03-18 02:14:43.798807 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.38s
2026-03-18 02:14:46.327327 | orchestrator | 2026-03-18 02:14:46 | INFO  | Task 7e2a09e4-1bf3-404b-936f-844d92011f4f (opensearch) was prepared for execution.
2026-03-18 02:14:46.327434 | orchestrator | 2026-03-18 02:14:46 | INFO  | It takes a moment until task 7e2a09e4-1bf3-404b-936f-844d92011f4f (opensearch) has been started and output is visible here.
2026-03-18 02:14:57.622215 | orchestrator |
2026-03-18 02:14:57.622321 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-18 02:14:57.622336 | orchestrator |
2026-03-18 02:14:57.622353 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-18 02:14:57.622370 | orchestrator | Wednesday 18 March 2026 02:14:50 +0000 (0:00:00.262) 0:00:00.262 *******
2026-03-18 02:14:57.622384 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:14:57.622401 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:14:57.622416 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:14:57.622431 | orchestrator |
2026-03-18 02:14:57.622444 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-18 02:14:57.622460 | orchestrator | Wednesday 18 March 2026 02:14:51 +0000 (0:00:00.322) 0:00:00.585 *******
2026-03-18 02:14:57.622476 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-03-18 02:14:57.622511 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-03-18 02:14:57.622526 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-03-18 02:14:57.622543 | orchestrator |
2026-03-18 02:14:57.622559 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-03-18 02:14:57.622576 | orchestrator |
2026-03-18 02:14:57.622590 |
orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-18 02:14:57.622622 | orchestrator | Wednesday 18 March 2026 02:14:51 +0000 (0:00:00.447) 0:00:01.033 ******* 2026-03-18 02:14:57.622632 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:14:57.622641 | orchestrator | 2026-03-18 02:14:57.622649 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-18 02:14:57.622658 | orchestrator | Wednesday 18 March 2026 02:14:52 +0000 (0:00:00.531) 0:00:01.564 ******* 2026-03-18 02:14:57.622667 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-18 02:14:57.622675 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-18 02:14:57.622684 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-18 02:14:57.622693 | orchestrator | 2026-03-18 02:14:57.622702 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-18 02:14:57.622710 | orchestrator | Wednesday 18 March 2026 02:14:52 +0000 (0:00:00.684) 0:00:02.248 ******* 2026-03-18 02:14:57.622723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-18 02:14:57.622736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-18 02:14:57.622764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-18 02:14:57.622783 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-18 02:14:57.622804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-18 02:14:57.622816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-18 02:14:57.622827 | orchestrator | 2026-03-18 02:14:57.622837 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-18 02:14:57.622847 | orchestrator | Wednesday 18 March 2026 02:14:54 +0000 (0:00:01.677) 0:00:03.926 ******* 2026-03-18 02:14:57.622858 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:14:57.622868 | orchestrator | 2026-03-18 02:14:57.622933 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-18 02:14:57.622944 | orchestrator | Wednesday 18 March 2026 02:14:54 +0000 (0:00:00.568) 0:00:04.494 
******* 2026-03-18 02:14:57.622963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-18 02:14:58.509633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-18 02:14:58.509769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-18 02:14:58.509791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-18 02:14:58.509807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-18 02:14:58.509917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-03-18 02:14:58.509944 | orchestrator | 2026-03-18 02:14:58.509965 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-18 02:14:58.509985 | orchestrator | Wednesday 18 March 2026 02:14:57 +0000 (0:00:02.614) 0:00:07.109 ******* 2026-03-18 02:14:58.510007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-18 02:14:58.510094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-18 02:14:58.510110 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:14:58.510125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-18 02:14:58.510170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-18 02:14:59.624229 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:14:59.624316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-18 02:14:59.624332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-18 02:14:59.624342 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:14:59.624351 | orchestrator | 2026-03-18 02:14:59.624360 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-18 02:14:59.624369 | orchestrator | Wednesday 18 March 2026 02:14:58 +0000 (0:00:00.889) 0:00:07.998 ******* 2026-03-18 02:14:59.624378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-18 02:14:59.624419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-18 02:14:59.624441 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:14:59.624450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-18 02:14:59.624459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-18 02:14:59.624467 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:14:59.624476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-18 02:14:59.624494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-18 02:14:59.624503 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:14:59.624511 | orchestrator | 2026-03-18 02:14:59.624519 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-18 02:14:59.624538 | orchestrator | Wednesday 18 March 2026 02:14:59 +0000 (0:00:01.112) 0:00:09.111 ******* 2026-03-18 02:15:08.028390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-18 02:15:08.028502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-18 02:15:08.028518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-18 02:15:08.028570 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-18 02:15:08.028601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-18 02:15:08.028615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-18 02:15:08.028626 | orchestrator | 2026-03-18 02:15:08.028638 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-18 02:15:08.028658 | orchestrator | Wednesday 18 March 2026 02:15:01 +0000 (0:00:02.370) 0:00:11.481 ******* 2026-03-18 02:15:08.028668 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:15:08.028679 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:15:08.028688 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:15:08.028722 | orchestrator | 2026-03-18 02:15:08.028757 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-18 02:15:08.028780 | orchestrator | Wednesday 18 
March 2026 02:15:04 +0000 (0:00:02.476) 0:00:13.957 ******* 2026-03-18 02:15:08.028795 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:15:08.028811 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:15:08.028827 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:15:08.028843 | orchestrator | 2026-03-18 02:15:08.028988 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-18 02:15:08.029014 | orchestrator | Wednesday 18 March 2026 02:15:06 +0000 (0:00:01.935) 0:00:15.893 ******* 2026-03-18 02:15:08.029033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-18 02:15:08.029064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-18 02:15:08.029098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-18 02:17:51.795778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-18 02:17:51.795935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-18 02:17:51.795962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-18 02:17:51.795971 | orchestrator | 2026-03-18 02:17:51.795979 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-18 02:17:51.795988 | orchestrator | Wednesday 18 March 2026 02:15:08 +0000 (0:00:01.626) 0:00:17.520 ******* 2026-03-18 02:17:51.796040 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:17:51.796048 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:17:51.796055 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:17:51.796061 | orchestrator | 2026-03-18 02:17:51.796068 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-18 02:17:51.796085 | orchestrator | Wednesday 18 March 2026 02:15:08 +0000 (0:00:00.316) 0:00:17.836 ******* 2026-03-18 02:17:51.796101 | orchestrator | 2026-03-18 02:17:51.796108 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-18 02:17:51.796115 | orchestrator | Wednesday 18 March 2026 02:15:08 +0000 (0:00:00.062) 0:00:17.899 ******* 2026-03-18 02:17:51.796121 | orchestrator | 2026-03-18 02:17:51.796128 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-18 02:17:51.796134 | orchestrator | Wednesday 18 March 2026 02:15:08 +0000 (0:00:00.068) 0:00:17.967 ******* 2026-03-18 02:17:51.796149 | orchestrator | 2026-03-18 02:17:51.796155 | orchestrator | RUNNING HANDLER [opensearch : 
Disable shard allocation] ************************ 2026-03-18 02:17:51.796180 | orchestrator | Wednesday 18 March 2026 02:15:08 +0000 (0:00:00.068) 0:00:18.035 ******* 2026-03-18 02:17:51.796187 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:17:51.796194 | orchestrator | 2026-03-18 02:17:51.796200 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-18 02:17:51.796207 | orchestrator | Wednesday 18 March 2026 02:15:08 +0000 (0:00:00.241) 0:00:18.277 ******* 2026-03-18 02:17:51.796213 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:17:51.796220 | orchestrator | 2026-03-18 02:17:51.796226 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-18 02:17:51.796233 | orchestrator | Wednesday 18 March 2026 02:15:09 +0000 (0:00:00.668) 0:00:18.945 ******* 2026-03-18 02:17:51.796239 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:17:51.796246 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:17:51.796252 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:17:51.796259 | orchestrator | 2026-03-18 02:17:51.796265 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-18 02:17:51.796272 | orchestrator | Wednesday 18 March 2026 02:16:13 +0000 (0:01:04.033) 0:01:22.978 ******* 2026-03-18 02:17:51.796278 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:17:51.796285 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:17:51.796291 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:17:51.796298 | orchestrator | 2026-03-18 02:17:51.796304 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-18 02:17:51.796311 | orchestrator | Wednesday 18 March 2026 02:17:41 +0000 (0:01:27.772) 0:02:50.751 ******* 2026-03-18 02:17:51.796319 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-18 02:17:51.796326 | orchestrator | 2026-03-18 02:17:51.796332 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-18 02:17:51.796339 | orchestrator | Wednesday 18 March 2026 02:17:41 +0000 (0:00:00.520) 0:02:51.271 ******* 2026-03-18 02:17:51.796345 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:17:51.796352 | orchestrator | 2026-03-18 02:17:51.796359 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-18 02:17:51.796365 | orchestrator | Wednesday 18 March 2026 02:17:44 +0000 (0:00:02.800) 0:02:54.071 ******* 2026-03-18 02:17:51.796372 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:17:51.796378 | orchestrator | 2026-03-18 02:17:51.796385 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-18 02:17:51.796391 | orchestrator | Wednesday 18 March 2026 02:17:46 +0000 (0:00:02.168) 0:02:56.240 ******* 2026-03-18 02:17:51.796398 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:17:51.796404 | orchestrator | 2026-03-18 02:17:51.796411 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-18 02:17:51.796417 | orchestrator | Wednesday 18 March 2026 02:17:49 +0000 (0:00:02.643) 0:02:58.883 ******* 2026-03-18 02:17:51.796424 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:17:51.796431 | orchestrator | 2026-03-18 02:17:51.796437 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 02:17:51.796445 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-18 02:17:51.796453 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-18 02:17:51.796460 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  
rescued=0 ignored=0 2026-03-18 02:17:51.796467 | orchestrator | 2026-03-18 02:17:51.796474 | orchestrator | 2026-03-18 02:17:51.796481 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 02:17:51.796492 | orchestrator | Wednesday 18 March 2026 02:17:51 +0000 (0:00:02.381) 0:03:01.265 ******* 2026-03-18 02:17:51.796505 | orchestrator | =============================================================================== 2026-03-18 02:17:51.796512 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 87.77s 2026-03-18 02:17:51.796519 | orchestrator | opensearch : Restart opensearch container ------------------------------ 64.03s 2026-03-18 02:17:51.796525 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.80s 2026-03-18 02:17:51.796531 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.64s 2026-03-18 02:17:51.796538 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.61s 2026-03-18 02:17:51.796544 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.48s 2026-03-18 02:17:51.796551 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.38s 2026-03-18 02:17:51.796557 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.37s 2026-03-18 02:17:51.796564 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.17s 2026-03-18 02:17:51.796570 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.94s 2026-03-18 02:17:51.796577 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.68s 2026-03-18 02:17:51.796583 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.63s 2026-03-18 02:17:51.796590 | 
orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.11s 2026-03-18 02:17:51.796596 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.89s 2026-03-18 02:17:51.796603 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.68s 2026-03-18 02:17:51.796610 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.67s 2026-03-18 02:17:51.796621 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-03-18 02:17:52.195740 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2026-03-18 02:17:52.195852 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-03-18 02:17:52.195859 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2026-03-18 02:17:54.772240 | orchestrator | 2026-03-18 02:17:54 | INFO  | Task 66d719b8-abe6-410d-b46c-d1c9af1b4c53 (memcached) was prepared for execution. 2026-03-18 02:17:54.772359 | orchestrator | 2026-03-18 02:17:54 | INFO  | It takes a moment until task 66d719b8-abe6-410d-b46c-d1c9af1b4c53 (memcached) has been started and output is visible here. 
2026-03-18 02:18:07.276104 | orchestrator | 2026-03-18 02:18:07.276261 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 02:18:07.276290 | orchestrator | 2026-03-18 02:18:07.276310 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 02:18:07.276329 | orchestrator | Wednesday 18 March 2026 02:17:59 +0000 (0:00:00.271) 0:00:00.271 ******* 2026-03-18 02:18:07.276350 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:18:07.276370 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:18:07.276391 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:18:07.276410 | orchestrator | 2026-03-18 02:18:07.276430 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 02:18:07.276451 | orchestrator | Wednesday 18 March 2026 02:17:59 +0000 (0:00:00.370) 0:00:00.642 ******* 2026-03-18 02:18:07.276472 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-18 02:18:07.276491 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-18 02:18:07.276504 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-18 02:18:07.276518 | orchestrator | 2026-03-18 02:18:07.276532 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-18 02:18:07.276546 | orchestrator | 2026-03-18 02:18:07.276558 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-18 02:18:07.276603 | orchestrator | Wednesday 18 March 2026 02:18:00 +0000 (0:00:00.450) 0:00:01.093 ******* 2026-03-18 02:18:07.276616 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:18:07.276630 | orchestrator | 2026-03-18 02:18:07.276643 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-03-18 02:18:07.276658 | orchestrator | Wednesday 18 March 2026 02:18:00 +0000 (0:00:00.509) 0:00:01.602 ******* 2026-03-18 02:18:07.276676 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-18 02:18:07.276695 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-18 02:18:07.276713 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-18 02:18:07.276732 | orchestrator | 2026-03-18 02:18:07.276751 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-18 02:18:07.276771 | orchestrator | Wednesday 18 March 2026 02:18:01 +0000 (0:00:00.702) 0:00:02.305 ******* 2026-03-18 02:18:07.276791 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-18 02:18:07.276811 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-18 02:18:07.276832 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-18 02:18:07.276852 | orchestrator | 2026-03-18 02:18:07.276869 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-03-18 02:18:07.276884 | orchestrator | Wednesday 18 March 2026 02:18:03 +0000 (0:00:01.913) 0:00:04.219 ******* 2026-03-18 02:18:07.276903 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:18:07.276921 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:18:07.276938 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:18:07.276954 | orchestrator | 2026-03-18 02:18:07.276994 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-18 02:18:07.277048 | orchestrator | Wednesday 18 March 2026 02:18:04 +0000 (0:00:01.538) 0:00:05.757 ******* 2026-03-18 02:18:07.277068 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:18:07.277085 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:18:07.277103 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:18:07.277120 | orchestrator | 2026-03-18 
02:18:07.277140 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 02:18:07.277160 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 02:18:07.277181 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 02:18:07.277200 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 02:18:07.277217 | orchestrator | 2026-03-18 02:18:07.277237 | orchestrator | 2026-03-18 02:18:07.277257 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 02:18:07.277275 | orchestrator | Wednesday 18 March 2026 02:18:06 +0000 (0:00:02.086) 0:00:07.844 ******* 2026-03-18 02:18:07.277289 | orchestrator | =============================================================================== 2026-03-18 02:18:07.277300 | orchestrator | memcached : Restart memcached container --------------------------------- 2.09s 2026-03-18 02:18:07.277311 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.91s 2026-03-18 02:18:07.277322 | orchestrator | memcached : Check memcached container ----------------------------------- 1.54s 2026-03-18 02:18:07.277333 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.70s 2026-03-18 02:18:07.277344 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.51s 2026-03-18 02:18:07.277355 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2026-03-18 02:18:07.277366 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2026-03-18 02:18:09.769815 | orchestrator | 2026-03-18 02:18:09 | INFO  | Task b630429c-d097-4e3d-b60e-cd120cb6ff49 (redis) was prepared for execution. 
2026-03-18 02:18:09.769907 | orchestrator | 2026-03-18 02:18:09 | INFO  | It takes a moment until task b630429c-d097-4e3d-b60e-cd120cb6ff49 (redis) has been started and output is visible here.
2026-03-18 02:18:19.026162 | orchestrator |
2026-03-18 02:18:19.026270 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-18 02:18:19.026288 | orchestrator |
2026-03-18 02:18:19.026299 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-18 02:18:19.026309 | orchestrator | Wednesday 18 March 2026 02:18:14 +0000 (0:00:00.312) 0:00:00.312 *******
2026-03-18 02:18:19.026318 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:18:19.026329 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:18:19.026339 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:18:19.026349 | orchestrator |
2026-03-18 02:18:19.026358 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-18 02:18:19.026367 | orchestrator | Wednesday 18 March 2026 02:18:14 +0000 (0:00:00.311) 0:00:00.623 *******
2026-03-18 02:18:19.026376 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-18 02:18:19.026387 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-18 02:18:19.026397 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-18 02:18:19.026407 | orchestrator |
2026-03-18 02:18:19.026416 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-18 02:18:19.026425 | orchestrator |
2026-03-18 02:18:19.026434 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-18 02:18:19.026443 | orchestrator | Wednesday 18 March 2026 02:18:15 +0000 (0:00:00.424) 0:00:01.048 *******
2026-03-18 02:18:19.026452 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:18:19.026459 | orchestrator |
2026-03-18 02:18:19.026464 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-18 02:18:19.026470 | orchestrator | Wednesday 18 March 2026 02:18:15 +0000 (0:00:00.520) 0:00:01.568 *******
2026-03-18 02:18:19.026480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 02:18:19.026491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 02:18:19.026497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 02:18:19.026505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 02:18:19.026548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 02:18:19.026555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 02:18:19.026561 | orchestrator |
2026-03-18 02:18:19.026567 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-18 02:18:19.026573 | orchestrator | Wednesday 18 March 2026 02:18:16 +0000 (0:00:01.072) 0:00:02.640 *******
2026-03-18 02:18:19.026579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 02:18:19.026656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 02:18:19.026668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 02:18:19.026679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 02:18:19.026690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 02:18:23.207530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 02:18:23.207636 | orchestrator |
2026-03-18 02:18:23.207652 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-03-18 02:18:23.207665 | orchestrator | Wednesday 18 March 2026 02:18:19 +0000 (0:00:02.407) 0:00:05.048 *******
2026-03-18 02:18:23.207677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 02:18:23.207689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 02:18:23.207714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 02:18:23.207747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 02:18:23.207758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 02:18:23.207786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 02:18:23.207797 | orchestrator |
2026-03-18 02:18:23.207807 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-03-18 02:18:23.207822 | orchestrator | Wednesday 18 March 2026 02:18:21 +0000 (0:00:02.394) 0:00:07.442 *******
2026-03-18 02:18:23.207840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 02:18:23.207858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 02:18:23.207883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 02:18:23.207913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 02:18:23.207933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 02:18:23.207961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 02:18:38.358894 | orchestrator |
2026-03-18 02:18:38.359013 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-18 02:18:38.359087 | orchestrator | Wednesday 18 March 2026 02:18:22 +0000 (0:00:01.469) 0:00:08.912 *******
2026-03-18 02:18:38.359100 | orchestrator |
2026-03-18 02:18:38.359111 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-18 02:18:38.359122 | orchestrator | Wednesday 18 March 2026 02:18:22 +0000 (0:00:00.088) 0:00:09.000 *******
2026-03-18 02:18:38.359133 | orchestrator |
2026-03-18 02:18:38.359144 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-18 02:18:38.359155 | orchestrator | Wednesday 18 March 2026 02:18:23 +0000 (0:00:00.095) 0:00:09.096 *******
2026-03-18 02:18:38.359166 | orchestrator |
2026-03-18 02:18:38.359177 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-03-18 02:18:38.359187 | orchestrator | Wednesday 18 March 2026 02:18:23 +0000 (0:00:00.130) 0:00:09.227 *******
2026-03-18 02:18:38.359199 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:18:38.359211 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:18:38.359222 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:18:38.359233 | orchestrator |
2026-03-18 02:18:38.359244 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-03-18 02:18:38.359255 | orchestrator | Wednesday 18 March 2026 02:18:30 +0000 (0:00:07.799) 0:00:17.026 *******
2026-03-18 02:18:38.359265 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:18:38.359277 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:18:38.359316 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:18:38.359327 | orchestrator |
2026-03-18 02:18:38.359339 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 02:18:38.359350 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 02:18:38.359363 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 02:18:38.359373 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 02:18:38.359384 | orchestrator |
2026-03-18 02:18:38.359395 | orchestrator |
2026-03-18 02:18:38.359421 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 02:18:38.359434 | orchestrator | Wednesday 18 March 2026 02:18:37 +0000 (0:00:06.998) 0:00:24.025 *******
2026-03-18 02:18:38.359447 | orchestrator | ===============================================================================
2026-03-18 02:18:38.359460 | orchestrator | redis : Restart redis container ----------------------------------------- 7.80s
2026-03-18 02:18:38.359472 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.00s
2026-03-18 02:18:38.359485 | orchestrator | redis : Copying over default config.json files -------------------------- 2.41s
2026-03-18 02:18:38.359497 | orchestrator | redis : Copying over redis config files --------------------------------- 2.39s
2026-03-18 02:18:38.359509 | orchestrator | redis : Check redis containers ------------------------------------------ 1.47s
2026-03-18 02:18:38.359522 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.07s
2026-03-18 02:18:38.359533 | orchestrator | redis : include_tasks --------------------------------------------------- 0.52s
2026-03-18 02:18:38.359546 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s
2026-03-18 02:18:38.359558 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.31s
2026-03-18 02:18:38.359570 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-03-18 02:18:40.892769 | orchestrator | 2026-03-18 02:18:40 | INFO  | Task b00e2eef-5f38-4053-a6c5-be1d1b11bc2b (mariadb) was prepared for execution.
2026-03-18 02:18:40.892852 | orchestrator | 2026-03-18 02:18:40 | INFO  | It takes a moment until task b00e2eef-5f38-4053-a6c5-be1d1b11bc2b (mariadb) has been started and output is visible here.
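The redis and redis-sentinel containers deployed above are each given a healthcheck of the form `healthcheck_listen <process> <port>` (ports 6379 and 26379). A minimal sketch of what such a liveness probe boils down to, assuming `healthcheck_listen` essentially verifies that something accepts TCP connections on the given port (the `check_listen` helper below is illustrative, not part of the kolla images):

```python
import socket

# Ports taken from the healthcheck definitions in the log above.
REDIS_PORT = 6379
SENTINEL_PORT = 26379

def check_listen(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP listener accepts a connection on host:port.

    Rough stand-in for the kolla `healthcheck_listen <proc> <port>` test
    (assumption: that script checks the named process is listening on
    the given port; this sketch only does the TCP part).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

With the containers up, `check_listen("192.168.16.10", REDIS_PORT)` would report whether the redis service on testbed-node-0 accepts connections; the real healthcheck runs inside the container every 30 seconds with 3 retries, per the `healthcheck` dict in the log.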
2026-03-18 02:18:55.863844 | orchestrator |
2026-03-18 02:18:55.863985 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-18 02:18:55.864015 | orchestrator |
2026-03-18 02:18:55.864074 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-18 02:18:55.864092 | orchestrator | Wednesday 18 March 2026 02:18:45 +0000 (0:00:00.253) 0:00:00.253 *******
2026-03-18 02:18:55.864108 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:18:55.864126 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:18:55.864142 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:18:55.864158 | orchestrator |
2026-03-18 02:18:55.864174 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-18 02:18:55.864190 | orchestrator | Wednesday 18 March 2026 02:18:45 +0000 (0:00:00.342) 0:00:00.595 *******
2026-03-18 02:18:55.864209 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-18 02:18:55.864227 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-18 02:18:55.864244 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-18 02:18:55.864260 | orchestrator |
2026-03-18 02:18:55.864276 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-18 02:18:55.864291 | orchestrator |
2026-03-18 02:18:55.864307 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-18 02:18:55.864322 | orchestrator | Wednesday 18 March 2026 02:18:46 +0000 (0:00:00.572) 0:00:01.168 *******
2026-03-18 02:18:55.864374 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-18 02:18:55.864391 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-18 02:18:55.864410 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-18 02:18:55.864427 | orchestrator |
2026-03-18 02:18:55.864444 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-18 02:18:55.864462 | orchestrator | Wednesday 18 March 2026 02:18:46 +0000 (0:00:00.422) 0:00:01.590 *******
2026-03-18 02:18:55.864478 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:18:55.864496 | orchestrator |
2026-03-18 02:18:55.864513 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-03-18 02:18:55.864530 | orchestrator | Wednesday 18 March 2026 02:18:47 +0000 (0:00:00.546) 0:00:02.137 *******
2026-03-18 02:18:55.864574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-18 02:18:55.864630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-18 02:18:55.864675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-18 02:18:55.864695 | orchestrator |
2026-03-18 02:18:55.864713 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-03-18 02:18:55.864730 | orchestrator | Wednesday 18 March 2026 02:18:50 +0000 (0:00:03.140) 0:00:05.277 *******
2026-03-18 02:18:55.864746 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:18:55.864764 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:18:55.864780 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:18:55.864796 | orchestrator |
2026-03-18 02:18:55.864812 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-03-18 02:18:55.864827 | orchestrator | Wednesday 18 March 2026 02:18:51 +0000 (0:00:00.646) 0:00:05.924 *******
2026-03-18 02:18:55.864843 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:18:55.864860 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:18:55.864877 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:18:55.864893 | orchestrator |
2026-03-18 02:18:55.864907 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-03-18 02:18:55.864922 | orchestrator | Wednesday 18 March 2026 02:18:52 +0000 (0:00:01.471) 0:00:07.396 *******
2026-03-18 02:18:55.864958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-18 02:19:03.890358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-18 02:19:03.890470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-18 02:19:03.890507 | orchestrator |
2026-03-18 02:19:03.890521 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-03-18 02:19:03.890532 | orchestrator | Wednesday 18 March 2026 02:18:55 +0000 (0:00:03.175) 0:00:10.572 *******
2026-03-18 02:19:03.890558 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:19:03.890569 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:19:03.890589 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:19:03.890599 | orchestrator |
2026-03-18 02:19:03.890615 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-03-18 02:19:03.890650 | orchestrator | Wednesday 18 March 2026 02:18:56 +0000 (0:00:01.073) 0:00:11.645 *******
2026-03-18 02:19:03.890668 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:19:03.890685 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:19:03.890701 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:19:03.890718 | orchestrator |
2026-03-18 02:19:03.890734 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-18 02:19:03.890744 | orchestrator | Wednesday 18 March 2026 02:19:00 +0000 (0:00:03.833) 0:00:15.479 *******
2026-03-18 02:19:03.890754 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:19:03.890765 | orchestrator |
2026-03-18 02:19:03.890774 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-18 02:19:03.890784 | orchestrator | Wednesday 18 March 2026 02:19:01 +0000 (0:00:00.606) 0:00:16.085 *******
2026-03-18 02:19:03.890803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 02:19:03.890824 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:19:03.890843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 02:19:08.882360 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:19:08.882485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 02:19:08.882532 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:19:08.882545 | orchestrator | 2026-03-18 02:19:08.882558 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-18 02:19:08.882570 | orchestrator | Wednesday 18 March 2026 02:19:03 +0000 (0:00:02.514) 0:00:18.599 ******* 2026-03-18 02:19:08.882583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 02:19:08.882595 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:19:08.882633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 02:19:08.882656 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:19:08.882668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 02:19:08.882680 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:19:08.882691 | orchestrator | 2026-03-18 02:19:08.882703 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-18 02:19:08.882714 | orchestrator | Wednesday 18 March 2026 02:19:06 +0000 (0:00:02.632) 0:00:21.232 ******* 2026-03-18 02:19:08.882741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 02:19:11.686893 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:19:11.686979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 02:19:11.686995 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:19:11.687069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 02:19:11.687087 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:19:11.687098 | orchestrator | 2026-03-18 02:19:11.687109 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-18 02:19:11.687143 | orchestrator | Wednesday 18 March 2026 02:19:08 +0000 (0:00:02.363) 0:00:23.595 ******* 2026-03-18 02:19:11.687174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-18 02:19:11.687186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-18 02:19:11.687205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-18 02:21:30.284153 | orchestrator | 2026-03-18 02:21:30.284267 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-18 02:21:30.284278 | orchestrator | Wednesday 18 March 2026 02:19:11 +0000 (0:00:02.804) 0:00:26.399 ******* 2026-03-18 02:21:30.284286 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:21:30.284293 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:21:30.284300 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:21:30.284306 | orchestrator | 2026-03-18 02:21:30.284313 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-18 02:21:30.284319 | orchestrator | Wednesday 18 March 2026 02:19:12 +0000 (0:00:00.871) 0:00:27.271 ******* 2026-03-18 02:21:30.284325 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:21:30.284332 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:21:30.284339 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:21:30.284345 | orchestrator | 2026-03-18 02:21:30.284351 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2026-03-18 02:21:30.284357 | orchestrator | Wednesday 18 March 2026 02:19:13 +0000 (0:00:00.623) 0:00:27.894 ******* 2026-03-18 02:21:30.284363 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:21:30.284369 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:21:30.284375 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:21:30.284381 | orchestrator | 2026-03-18 02:21:30.284387 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-18 02:21:30.284393 | orchestrator | Wednesday 18 March 2026 02:19:13 +0000 (0:00:00.359) 0:00:28.253 ******* 2026-03-18 02:21:30.284401 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-18 02:21:30.284408 | orchestrator | ...ignoring 2026-03-18 02:21:30.284415 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-18 02:21:30.284421 | orchestrator | ...ignoring 2026-03-18 02:21:30.284428 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-18 02:21:30.284434 | orchestrator | ...ignoring 2026-03-18 02:21:30.284442 | orchestrator | 2026-03-18 02:21:30.284453 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-18 02:21:30.284484 | orchestrator | Wednesday 18 March 2026 02:19:24 +0000 (0:00:10.861) 0:00:39.115 ******* 2026-03-18 02:21:30.284496 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:21:30.284507 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:21:30.284519 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:21:30.284530 | orchestrator | 2026-03-18 02:21:30.284537 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-18 02:21:30.284543 | orchestrator | Wednesday 18 March 2026 02:19:24 +0000 (0:00:00.421) 0:00:39.537 ******* 2026-03-18 02:21:30.284549 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:21:30.284554 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:21:30.284560 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:21:30.284566 | orchestrator | 2026-03-18 02:21:30.284572 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-18 02:21:30.284577 | orchestrator | Wednesday 18 March 2026 02:19:25 +0000 (0:00:00.749) 0:00:40.286 ******* 2026-03-18 02:21:30.284583 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:21:30.284589 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:21:30.284594 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:21:30.284600 | orchestrator | 2026-03-18 02:21:30.284606 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-18 02:21:30.284612 | orchestrator | Wednesday 18 March 2026 02:19:25 +0000 (0:00:00.435) 0:00:40.722 ******* 2026-03-18 02:21:30.284630 | orchestrator | skipping: 
[testbed-node-0] 2026-03-18 02:21:30.284636 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:21:30.284642 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:21:30.284648 | orchestrator | 2026-03-18 02:21:30.284654 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-18 02:21:30.284660 | orchestrator | Wednesday 18 March 2026 02:19:26 +0000 (0:00:00.431) 0:00:41.154 ******* 2026-03-18 02:21:30.284667 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:21:30.284674 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:21:30.284680 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:21:30.284686 | orchestrator | 2026-03-18 02:21:30.284693 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-18 02:21:30.284700 | orchestrator | Wednesday 18 March 2026 02:19:26 +0000 (0:00:00.433) 0:00:41.587 ******* 2026-03-18 02:21:30.284707 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:21:30.284713 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:21:30.284723 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:21:30.284733 | orchestrator | 2026-03-18 02:21:30.284742 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-18 02:21:30.284753 | orchestrator | Wednesday 18 March 2026 02:19:27 +0000 (0:00:00.794) 0:00:42.382 ******* 2026-03-18 02:21:30.284764 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:21:30.284773 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:21:30.284784 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-18 02:21:30.284791 | orchestrator | 2026-03-18 02:21:30.284798 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-18 02:21:30.284804 | orchestrator | Wednesday 18 March 2026 02:19:28 +0000 (0:00:00.388) 0:00:42.770 ******* 2026-03-18 
02:21:30.284811 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:21:30.284817 | orchestrator | 2026-03-18 02:21:30.284824 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-18 02:21:30.284843 | orchestrator | Wednesday 18 March 2026 02:19:38 +0000 (0:00:10.136) 0:00:52.907 ******* 2026-03-18 02:21:30.284850 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:21:30.284857 | orchestrator | 2026-03-18 02:21:30.284864 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-18 02:21:30.284870 | orchestrator | Wednesday 18 March 2026 02:19:38 +0000 (0:00:00.130) 0:00:53.037 ******* 2026-03-18 02:21:30.284878 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:21:30.284898 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:21:30.284905 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:21:30.284917 | orchestrator | 2026-03-18 02:21:30.284923 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-18 02:21:30.284929 | orchestrator | Wednesday 18 March 2026 02:19:39 +0000 (0:00:01.037) 0:00:54.074 ******* 2026-03-18 02:21:30.284934 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:21:30.284940 | orchestrator | 2026-03-18 02:21:30.284946 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-18 02:21:30.284952 | orchestrator | Wednesday 18 March 2026 02:19:47 +0000 (0:00:08.066) 0:01:02.140 ******* 2026-03-18 02:21:30.284957 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:21:30.284963 | orchestrator | 2026-03-18 02:21:30.284969 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-03-18 02:21:30.284977 | orchestrator | Wednesday 18 March 2026 02:19:49 +0000 (0:00:01.698) 0:01:03.839 ******* 2026-03-18 02:21:30.284987 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:21:30.284997 | 
orchestrator | 2026-03-18 02:21:30.285008 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-18 02:21:30.285019 | orchestrator | Wednesday 18 March 2026 02:19:51 +0000 (0:00:02.537) 0:01:06.376 ******* 2026-03-18 02:21:30.285029 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:21:30.285039 | orchestrator | 2026-03-18 02:21:30.285045 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-18 02:21:30.285051 | orchestrator | Wednesday 18 March 2026 02:19:51 +0000 (0:00:00.120) 0:01:06.496 ******* 2026-03-18 02:21:30.285057 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:21:30.285083 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:21:30.285090 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:21:30.285095 | orchestrator | 2026-03-18 02:21:30.285101 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-18 02:21:30.285107 | orchestrator | Wednesday 18 March 2026 02:19:52 +0000 (0:00:00.330) 0:01:06.827 ******* 2026-03-18 02:21:30.285113 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:21:30.285118 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-18 02:21:30.285124 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:21:30.285130 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:21:30.285135 | orchestrator | 2026-03-18 02:21:30.285141 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-18 02:21:30.285147 | orchestrator | skipping: no hosts matched 2026-03-18 02:21:30.285152 | orchestrator | 2026-03-18 02:21:30.285158 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-18 02:21:30.285164 | orchestrator | 2026-03-18 02:21:30.285170 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-03-18 02:21:30.285175 | orchestrator | Wednesday 18 March 2026 02:19:52 +0000 (0:00:00.583) 0:01:07.411 ******* 2026-03-18 02:21:30.285181 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:21:30.285187 | orchestrator | 2026-03-18 02:21:30.285192 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-18 02:21:30.285198 | orchestrator | Wednesday 18 March 2026 02:20:11 +0000 (0:00:18.590) 0:01:26.001 ******* 2026-03-18 02:21:30.285204 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:21:30.285210 | orchestrator | 2026-03-18 02:21:30.285215 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-18 02:21:30.285221 | orchestrator | Wednesday 18 March 2026 02:20:27 +0000 (0:00:16.583) 0:01:42.584 ******* 2026-03-18 02:21:30.285228 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:21:30.285237 | orchestrator | 2026-03-18 02:21:30.285246 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-18 02:21:30.285261 | orchestrator | 2026-03-18 02:21:30.285269 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-18 02:21:30.285279 | orchestrator | Wednesday 18 March 2026 02:20:30 +0000 (0:00:02.454) 0:01:45.039 ******* 2026-03-18 02:21:30.285285 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:21:30.285291 | orchestrator | 2026-03-18 02:21:30.285297 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-18 02:21:30.285307 | orchestrator | Wednesday 18 March 2026 02:20:49 +0000 (0:00:18.802) 0:02:03.842 ******* 2026-03-18 02:21:30.285313 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:21:30.285319 | orchestrator | 2026-03-18 02:21:30.285325 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-18 02:21:30.285330 
| orchestrator | Wednesday 18 March 2026 02:21:05 +0000 (0:00:16.602) 0:02:20.445 ******* 2026-03-18 02:21:30.285336 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:21:30.285342 | orchestrator | 2026-03-18 02:21:30.285348 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-18 02:21:30.285353 | orchestrator | 2026-03-18 02:21:30.285359 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-18 02:21:30.285365 | orchestrator | Wednesday 18 March 2026 02:21:08 +0000 (0:00:02.624) 0:02:23.069 ******* 2026-03-18 02:21:30.285370 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:21:30.285376 | orchestrator | 2026-03-18 02:21:30.285382 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-18 02:21:30.285388 | orchestrator | Wednesday 18 March 2026 02:21:21 +0000 (0:00:12.694) 0:02:35.763 ******* 2026-03-18 02:21:30.285393 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:21:30.285399 | orchestrator | 2026-03-18 02:21:30.285405 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-18 02:21:30.285411 | orchestrator | Wednesday 18 March 2026 02:21:26 +0000 (0:00:05.664) 0:02:41.427 ******* 2026-03-18 02:21:30.285416 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:21:30.285422 | orchestrator | 2026-03-18 02:21:30.285428 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-18 02:21:30.285433 | orchestrator | 2026-03-18 02:21:30.285440 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-18 02:21:30.285449 | orchestrator | Wednesday 18 March 2026 02:21:29 +0000 (0:00:02.808) 0:02:44.236 ******* 2026-03-18 02:21:30.285459 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:21:30.285468 | orchestrator | 
2026-03-18 02:21:30.285478 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-03-18 02:21:30.285495 | orchestrator | Wednesday 18 March 2026 02:21:30 +0000 (0:00:00.753) 0:02:44.990 *******
2026-03-18 02:21:43.022802 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:21:43.022890 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:21:43.022898 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:21:43.022903 | orchestrator |
2026-03-18 02:21:43.022909 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-03-18 02:21:43.022915 | orchestrator | Wednesday 18 March 2026 02:21:32 +0000 (0:00:02.264) 0:02:47.254 *******
2026-03-18 02:21:43.022920 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:21:43.022924 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:21:43.022929 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:21:43.022933 | orchestrator |
2026-03-18 02:21:43.022938 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-03-18 02:21:43.022943 | orchestrator | Wednesday 18 March 2026 02:21:34 +0000 (0:00:02.055) 0:02:49.310 *******
2026-03-18 02:21:43.022948 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:21:43.022952 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:21:43.022957 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:21:43.022961 | orchestrator |
2026-03-18 02:21:43.022965 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-03-18 02:21:43.022971 | orchestrator | Wednesday 18 March 2026 02:21:36 +0000 (0:00:02.319) 0:02:51.629 *******
2026-03-18 02:21:43.022979 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:21:43.022986 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:21:43.022993 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:21:43.022999 | orchestrator |
2026-03-18 02:21:43.023006 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-03-18 02:21:43.023073 | orchestrator | Wednesday 18 March 2026 02:21:39 +0000 (0:00:02.139) 0:02:53.769 *******
2026-03-18 02:21:43.023083 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:21:43.023093 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:21:43.023101 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:21:43.023109 | orchestrator |
2026-03-18 02:21:43.023114 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-18 02:21:43.023119 | orchestrator | Wednesday 18 March 2026 02:21:42 +0000 (0:00:03.120) 0:02:56.889 *******
2026-03-18 02:21:43.023123 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:21:43.023127 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:21:43.023132 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:21:43.023136 | orchestrator |
2026-03-18 02:21:43.023141 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 02:21:43.023149 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-03-18 02:21:43.023158 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-18 02:21:43.023165 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-03-18 02:21:43.023172 | orchestrator |
2026-03-18 02:21:43.023180 | orchestrator |
2026-03-18 02:21:43.023187 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 02:21:43.023195 | orchestrator | Wednesday 18 March 2026 02:21:42 +0000 (0:00:00.468) 0:02:57.358 *******
2026-03-18 02:21:43.023201 | orchestrator | ===============================================================================
2026-03-18 02:21:43.023206 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.39s
2026-03-18 02:21:43.023225 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.19s
2026-03-18 02:21:43.023232 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.69s
2026-03-18 02:21:43.023239 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.86s
2026-03-18 02:21:43.023246 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.14s
2026-03-18 02:21:43.023254 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.07s
2026-03-18 02:21:43.023261 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.66s
2026-03-18 02:21:43.023269 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.08s
2026-03-18 02:21:43.023276 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.83s
2026-03-18 02:21:43.023284 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.18s
2026-03-18 02:21:43.023291 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.14s
2026-03-18 02:21:43.023300 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.12s
2026-03-18 02:21:43.023307 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.81s
2026-03-18 02:21:43.023314 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.80s
2026-03-18 02:21:43.023322 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.63s
2026-03-18 02:21:43.023331 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.54s
2026-03-18 02:21:43.023339 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.51s
2026-03-18 02:21:43.023346 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.36s
2026-03-18 02:21:43.023353 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.32s
2026-03-18 02:21:43.023361 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.26s
2026-03-18 02:21:45.777737 | orchestrator | 2026-03-18 02:21:45 | INFO  | Task c7d01361-4536-4c8c-be60-f8cd6d1e9c76 (rabbitmq) was prepared for execution.
2026-03-18 02:21:45.777834 | orchestrator | 2026-03-18 02:21:45 | INFO  | It takes a moment until task c7d01361-4536-4c8c-be60-f8cd6d1e9c76 (rabbitmq) has been started and output is visible here.
2026-03-18 02:21:59.972500 | orchestrator |
2026-03-18 02:21:59.972589 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-18 02:21:59.972599 | orchestrator |
2026-03-18 02:21:59.972607 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-18 02:21:59.972614 | orchestrator | Wednesday 18 March 2026 02:21:50 +0000 (0:00:00.188) 0:00:00.188 *******
2026-03-18 02:21:59.972621 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:21:59.972629 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:21:59.972636 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:21:59.972643 | orchestrator |
2026-03-18 02:21:59.972649 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-18 02:21:59.972656 | orchestrator | Wednesday 18 March 2026 02:21:50 +0000 (0:00:00.332) 0:00:00.521 *******
2026-03-18 02:21:59.972663 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-03-18 02:21:59.972671 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-03-18 02:21:59.972678 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-03-18 02:21:59.972684 | orchestrator |
2026-03-18 02:21:59.972691 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-03-18 02:21:59.972698 | orchestrator |
2026-03-18 02:21:59.972706 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-18 02:21:59.972712 | orchestrator | Wednesday 18 March 2026 02:21:51 +0000 (0:00:00.600) 0:00:01.122 *******
2026-03-18 02:21:59.972720 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:21:59.972727 | orchestrator |
2026-03-18 02:21:59.972734 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-18 02:21:59.972741 | orchestrator | Wednesday 18 March 2026 02:21:52 +0000 (0:00:00.537) 0:00:01.659 *******
2026-03-18 02:21:59.972747 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:21:59.972754 | orchestrator |
2026-03-18 02:21:59.972761 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-03-18 02:21:59.972767 | orchestrator | Wednesday 18 March 2026 02:21:53 +0000 (0:00:01.003) 0:00:02.662 *******
2026-03-18 02:21:59.972774 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:21:59.972782 | orchestrator |
2026-03-18 02:21:59.972788 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-03-18 02:21:59.972795 | orchestrator | Wednesday 18 March 2026 02:21:53 +0000 (0:00:00.399) 0:00:03.062 *******
2026-03-18 02:21:59.972802 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:21:59.972808 | orchestrator |
2026-03-18 02:21:59.972815 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-03-18 02:21:59.972822 | orchestrator | Wednesday 18 March 2026 02:21:53 +0000 (0:00:00.391) 0:00:03.453 *******
2026-03-18 02:21:59.972828 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:21:59.972835 | orchestrator |
2026-03-18 02:21:59.972841 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-03-18 02:21:59.972848 | orchestrator | Wednesday 18 March 2026 02:21:54 +0000 (0:00:00.446) 0:00:03.900 *******
2026-03-18 02:21:59.972855 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:21:59.972861 | orchestrator |
2026-03-18 02:21:59.972868 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-18 02:21:59.972875 | orchestrator | Wednesday 18 March 2026 02:21:54 +0000 (0:00:00.694) 0:00:04.594 *******
2026-03-18 02:21:59.972897 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:21:59.972904 | orchestrator |
2026-03-18 02:21:59.972910 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-18 02:21:59.972934 | orchestrator | Wednesday 18 March 2026 02:21:55 +0000 (0:00:00.906) 0:00:05.501 *******
2026-03-18 02:21:59.972941 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:21:59.972948 | orchestrator |
2026-03-18 02:21:59.972955 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-03-18 02:21:59.972961 | orchestrator | Wednesday 18 March 2026 02:21:56 +0000 (0:00:00.800) 0:00:06.302 *******
2026-03-18 02:21:59.972988 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:21:59.972995 | orchestrator |
2026-03-18 02:21:59.973002 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-03-18 02:21:59.973008 | orchestrator | Wednesday 18 March 2026 02:21:57 +0000 (0:00:00.370) 0:00:06.673 *******
2026-03-18 02:21:59.973015 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:21:59.973021 | orchestrator |
2026-03-18 02:21:59.973028 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-03-18 02:21:59.973034 | orchestrator | Wednesday 18 March 2026 02:21:57 +0000 (0:00:00.401) 0:00:07.074 *******
2026-03-18 02:21:59.973059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-18 02:21:59.973071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-18 02:21:59.973081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-18 02:21:59.973095 | orchestrator |
2026-03-18 02:21:59.973102 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-03-18 02:21:59.973114 | orchestrator | Wednesday 18 March 2026 02:21:58 +0000 (0:00:00.848) 0:00:07.923 *******
2026-03-18 02:21:59.973123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-18 02:21:59.973137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-18 02:22:18.599506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-18 02:22:18.600447 | orchestrator |
2026-03-18 02:22:18.600498 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-03-18 02:22:18.600513 | orchestrator | Wednesday 18 March 2026 02:21:59 +0000 (0:00:01.652) 0:00:09.575 *******
2026-03-18 02:22:18.600524 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-18 02:22:18.600565 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-18 02:22:18.600578 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-18 02:22:18.600596 | orchestrator |
2026-03-18 02:22:18.600615 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-03-18 02:22:18.600633 | orchestrator | Wednesday 18 March 2026 02:22:01 +0000 (0:00:01.567) 0:00:11.143 *******
2026-03-18 02:22:18.600651 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-18 02:22:18.600668 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-18 02:22:18.600704 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-18 02:22:18.600725 | orchestrator |
2026-03-18 02:22:18.600743 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-03-18 02:22:18.600762 | orchestrator | Wednesday 18 March 2026 02:22:03 +0000 (0:00:01.697) 0:00:12.840 *******
2026-03-18 02:22:18.600781 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-18 02:22:18.600799 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-18 02:22:18.600817 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-18 02:22:18.600836 | orchestrator |
2026-03-18 02:22:18.600854 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-03-18 02:22:18.600872 | orchestrator | Wednesday 18 March 2026 02:22:04 +0000 (0:00:01.374) 0:00:14.215 *******
2026-03-18 02:22:18.600890 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-18 02:22:18.600907 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-18 02:22:18.601058 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-18 02:22:18.601079 | orchestrator |
2026-03-18 02:22:18.601096 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-03-18 02:22:18.601115 | orchestrator | Wednesday 18 March 2026 02:22:06 +0000 (0:00:01.728) 0:00:15.943 *******
2026-03-18 02:22:18.601135 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-18 02:22:18.601153 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-18 02:22:18.601172 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-18 02:22:18.601191 | orchestrator |
2026-03-18 02:22:18.601208 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-03-18 02:22:18.601219 | orchestrator | Wednesday 18 March 2026 02:22:07 +0000 (0:00:01.393) 0:00:17.337 *******
2026-03-18 02:22:18.601230 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-18 02:22:18.601241 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-18 02:22:18.601252 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-18 02:22:18.601269 | orchestrator |
2026-03-18 02:22:18.601358 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-18 02:22:18.601384 | orchestrator | Wednesday 18 March 2026 02:22:09 +0000 (0:00:01.356) 0:00:18.693 *******
2026-03-18 02:22:18.601404 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:22:18.601425 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:22:18.601474 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:22:18.601487 | orchestrator |
2026-03-18 02:22:18.601498 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-03-18 02:22:18.601565 | orchestrator | Wednesday 18 March 2026 02:22:09 +0000 (0:00:00.413) 0:00:19.106 *******
2026-03-18 02:22:18.601582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-18 02:22:18.601606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-18 02:22:18.601619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-18 02:22:18.601631 | orchestrator |
2026-03-18 02:22:18.601642 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-03-18 02:22:18.601653 | orchestrator | Wednesday 18 March 2026 02:22:10 +0000 (0:00:01.249) 0:00:20.356 *******
2026-03-18 02:22:18.601664 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:22:18.601675 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:22:18.601686 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:22:18.601697 | orchestrator |
2026-03-18 02:22:18.601708 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-03-18 02:22:18.601719 | orchestrator | Wednesday 18 March 2026 02:22:11 +0000 (0:00:00.824) 0:00:21.181 *******
2026-03-18 02:22:18.601737 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:22:18.601748 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:22:18.601758 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:22:18.601769 | orchestrator |
2026-03-18 02:22:18.601780 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-03-18 02:22:18.601799 | orchestrator | Wednesday 18 March 2026 02:22:18 +0000 (0:00:07.018) 0:00:28.200 *******
2026-03-18 02:23:53.678117 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:23:53.678245 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:23:53.678269 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:23:53.678288 | orchestrator |
2026-03-18 02:23:53.678307 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-18 02:23:53.678324 | orchestrator |
2026-03-18 02:23:53.678341 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-18 02:23:53.678358 | orchestrator | Wednesday 18 March 2026 02:22:19 +0000 (0:00:00.591) 0:00:28.792 *******
2026-03-18 02:23:53.678376 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:23:53.678396 | orchestrator |
2026-03-18 02:23:53.678412 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-18 02:23:53.678431 | orchestrator | Wednesday 18 March 2026 02:22:19 +0000 (0:00:00.589) 0:00:29.381 *******
2026-03-18 02:23:53.678449 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:23:53.678467 | orchestrator |
2026-03-18 02:23:53.678483 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-18 02:23:53.678501 | orchestrator | Wednesday
18 March 2026 02:22:20 +0000 (0:00:00.260) 0:00:29.641 ******* 2026-03-18 02:23:53.678519 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:23:53.678537 | orchestrator | 2026-03-18 02:23:53.678554 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-18 02:23:53.678578 | orchestrator | Wednesday 18 March 2026 02:22:21 +0000 (0:00:01.594) 0:00:31.236 ******* 2026-03-18 02:23:53.678599 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:23:53.678618 | orchestrator | 2026-03-18 02:23:53.678729 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-18 02:23:53.678747 | orchestrator | 2026-03-18 02:23:53.678766 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-18 02:23:53.678786 | orchestrator | Wednesday 18 March 2026 02:23:16 +0000 (0:00:54.761) 0:01:25.997 ******* 2026-03-18 02:23:53.678805 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:23:53.678828 | orchestrator | 2026-03-18 02:23:53.678846 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-18 02:23:53.678862 | orchestrator | Wednesday 18 March 2026 02:23:16 +0000 (0:00:00.604) 0:01:26.602 ******* 2026-03-18 02:23:53.678879 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:23:53.678897 | orchestrator | 2026-03-18 02:23:53.678913 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-18 02:23:53.678930 | orchestrator | Wednesday 18 March 2026 02:23:17 +0000 (0:00:00.233) 0:01:26.836 ******* 2026-03-18 02:23:53.678944 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:23:53.678957 | orchestrator | 2026-03-18 02:23:53.678971 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-18 02:23:53.678982 | orchestrator | Wednesday 18 March 2026 02:23:23 +0000 (0:00:06.633) 0:01:33.469 
******* 2026-03-18 02:23:53.678994 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:23:53.679006 | orchestrator | 2026-03-18 02:23:53.679038 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-18 02:23:53.679053 | orchestrator | 2026-03-18 02:23:53.679067 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-18 02:23:53.679082 | orchestrator | Wednesday 18 March 2026 02:23:33 +0000 (0:00:09.503) 0:01:42.972 ******* 2026-03-18 02:23:53.679096 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:23:53.679111 | orchestrator | 2026-03-18 02:23:53.679125 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-18 02:23:53.679139 | orchestrator | Wednesday 18 March 2026 02:23:34 +0000 (0:00:00.814) 0:01:43.787 ******* 2026-03-18 02:23:53.679181 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:23:53.679196 | orchestrator | 2026-03-18 02:23:53.679210 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-18 02:23:53.679224 | orchestrator | Wednesday 18 March 2026 02:23:34 +0000 (0:00:00.260) 0:01:44.048 ******* 2026-03-18 02:23:53.679236 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:23:53.679247 | orchestrator | 2026-03-18 02:23:53.679262 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-18 02:23:53.679274 | orchestrator | Wednesday 18 March 2026 02:23:36 +0000 (0:00:01.623) 0:01:45.672 ******* 2026-03-18 02:23:53.679287 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:23:53.679300 | orchestrator | 2026-03-18 02:23:53.679313 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-18 02:23:53.679327 | orchestrator | 2026-03-18 02:23:53.679341 | orchestrator | TASK [Include rabbitmq post-deploy.yml] 
**************************************** 2026-03-18 02:23:53.679354 | orchestrator | Wednesday 18 March 2026 02:23:50 +0000 (0:00:14.429) 0:02:00.101 ******* 2026-03-18 02:23:53.679368 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:23:53.679383 | orchestrator | 2026-03-18 02:23:53.679397 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-18 02:23:53.679411 | orchestrator | Wednesday 18 March 2026 02:23:51 +0000 (0:00:00.537) 0:02:00.638 ******* 2026-03-18 02:23:53.679425 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-18 02:23:53.679440 | orchestrator | enable_outward_rabbitmq_True 2026-03-18 02:23:53.679454 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-18 02:23:53.679467 | orchestrator | outward_rabbitmq_restart 2026-03-18 02:23:53.679482 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:23:53.679496 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:23:53.679510 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:23:53.679524 | orchestrator | 2026-03-18 02:23:53.679538 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-03-18 02:23:53.679552 | orchestrator | skipping: no hosts matched 2026-03-18 02:23:53.679566 | orchestrator | 2026-03-18 02:23:53.679580 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-18 02:23:53.679594 | orchestrator | skipping: no hosts matched 2026-03-18 02:23:53.679608 | orchestrator | 2026-03-18 02:23:53.679622 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-18 02:23:53.679636 | orchestrator | skipping: no hosts matched 2026-03-18 02:23:53.679650 | orchestrator | 2026-03-18 02:23:53.679664 | orchestrator | PLAY RECAP ********************************************************************* 
2026-03-18 02:23:53.679750 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-18 02:23:53.679770 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 02:23:53.679783 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 02:23:53.679796 | orchestrator | 2026-03-18 02:23:53.679810 | orchestrator | 2026-03-18 02:23:53.679825 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 02:23:53.679841 | orchestrator | Wednesday 18 March 2026 02:23:53 +0000 (0:00:02.297) 0:02:02.936 ******* 2026-03-18 02:23:53.679856 | orchestrator | =============================================================================== 2026-03-18 02:23:53.679870 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 78.69s 2026-03-18 02:23:53.679885 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.85s 2026-03-18 02:23:53.679900 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.02s 2026-03-18 02:23:53.679929 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.30s 2026-03-18 02:23:53.679944 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.01s 2026-03-18 02:23:53.679959 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.73s 2026-03-18 02:23:53.679975 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.70s 2026-03-18 02:23:53.679990 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.65s 2026-03-18 02:23:53.680006 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.57s 2026-03-18 02:23:53.680021 
| orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.39s 2026-03-18 02:23:53.680037 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.37s 2026-03-18 02:23:53.680051 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.36s 2026-03-18 02:23:53.680066 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.25s 2026-03-18 02:23:53.680080 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.00s 2026-03-18 02:23:53.680092 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.91s 2026-03-18 02:23:53.680114 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.85s 2026-03-18 02:23:53.680127 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.82s 2026-03-18 02:23:53.680140 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.80s 2026-03-18 02:23:53.680153 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.75s 2026-03-18 02:23:53.680167 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 0.69s 2026-03-18 02:23:56.184604 | orchestrator | 2026-03-18 02:23:56 | INFO  | Task d30c00f7-6a80-4243-835b-2e15c6ab6be1 (openvswitch) was prepared for execution. 2026-03-18 02:23:56.184752 | orchestrator | 2026-03-18 02:23:56 | INFO  | It takes a moment until task d30c00f7-6a80-4243-835b-2e15c6ab6be1 (openvswitch) has been started and output is visible here. 
2026-03-18 02:24:09.404430 | orchestrator | 2026-03-18 02:24:09.404522 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 02:24:09.404533 | orchestrator | 2026-03-18 02:24:09.404541 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 02:24:09.404548 | orchestrator | Wednesday 18 March 2026 02:24:00 +0000 (0:00:00.289) 0:00:00.289 ******* 2026-03-18 02:24:09.404556 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:24:09.404564 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:24:09.404571 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:24:09.404580 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:24:09.404592 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:24:09.404604 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:24:09.404611 | orchestrator | 2026-03-18 02:24:09.404618 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 02:24:09.404625 | orchestrator | Wednesday 18 March 2026 02:24:01 +0000 (0:00:00.692) 0:00:00.982 ******* 2026-03-18 02:24:09.404632 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-18 02:24:09.404640 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-18 02:24:09.404647 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-18 02:24:09.404654 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-18 02:24:09.404660 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-18 02:24:09.404667 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-18 02:24:09.404674 | orchestrator | 2026-03-18 02:24:09.404680 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-03-18 02:24:09.404687 | orchestrator | 2026-03-18 02:24:09.404710 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-18 02:24:09.404801 | orchestrator | Wednesday 18 March 2026 02:24:02 +0000 (0:00:00.607) 0:00:01.589 ******* 2026-03-18 02:24:09.404811 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 02:24:09.404819 | orchestrator | 2026-03-18 02:24:09.404826 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-18 02:24:09.404833 | orchestrator | Wednesday 18 March 2026 02:24:03 +0000 (0:00:01.196) 0:00:02.786 ******* 2026-03-18 02:24:09.404840 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-18 02:24:09.404847 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-18 02:24:09.404854 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-18 02:24:09.404860 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-18 02:24:09.404867 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-18 02:24:09.404873 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-18 02:24:09.404880 | orchestrator | 2026-03-18 02:24:09.404887 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-18 02:24:09.404893 | orchestrator | Wednesday 18 March 2026 02:24:04 +0000 (0:00:01.332) 0:00:04.119 ******* 2026-03-18 02:24:09.404900 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-18 02:24:09.404907 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-18 02:24:09.404913 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-18 02:24:09.404920 | orchestrator | changed: 
[testbed-node-1] => (item=openvswitch) 2026-03-18 02:24:09.404927 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-18 02:24:09.404933 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-18 02:24:09.404940 | orchestrator | 2026-03-18 02:24:09.404946 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-18 02:24:09.404953 | orchestrator | Wednesday 18 March 2026 02:24:06 +0000 (0:00:01.502) 0:00:05.621 ******* 2026-03-18 02:24:09.404960 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-18 02:24:09.404967 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:24:09.404974 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-18 02:24:09.404981 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:24:09.404989 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-18 02:24:09.404997 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:24:09.405004 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-18 02:24:09.405012 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:24:09.405019 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-18 02:24:09.405027 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:24:09.405034 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-18 02:24:09.405042 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:24:09.405049 | orchestrator | 2026-03-18 02:24:09.405058 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-18 02:24:09.405066 | orchestrator | Wednesday 18 March 2026 02:24:07 +0000 (0:00:01.207) 0:00:06.829 ******* 2026-03-18 02:24:09.405073 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:24:09.405082 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:24:09.405089 | orchestrator | skipping: [testbed-node-2] 
2026-03-18 02:24:09.405097 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:24:09.405105 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:24:09.405112 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:24:09.405120 | orchestrator | 2026-03-18 02:24:09.405128 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-18 02:24:09.405136 | orchestrator | Wednesday 18 March 2026 02:24:08 +0000 (0:00:00.803) 0:00:07.633 ******* 2026-03-18 02:24:09.405212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:09.405235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:09.405244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:09.405368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:09.405389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:09.405409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:11.949128 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:11.949213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:11.949224 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:11.949232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:11.949254 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:11.949291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:11.949301 | orchestrator | 2026-03-18 02:24:11.949310 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-18 02:24:11.949319 | orchestrator | Wednesday 18 March 2026 02:24:09 +0000 (0:00:01.430) 0:00:09.063 ******* 2026-03-18 02:24:11.949326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:11.949335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:11.949343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:11.949351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:11.949362 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:11.949381 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:14.753688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:14.753866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:14.753881 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:14.753906 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:14.753935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:14.753958 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:14.753965 | orchestrator | 2026-03-18 02:24:14.753972 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-18 02:24:14.753980 | orchestrator | Wednesday 18 March 2026 02:24:12 +0000 (0:00:02.572) 0:00:11.635 ******* 2026-03-18 02:24:14.753988 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:24:14.753996 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:24:14.754003 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:24:14.754010 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:24:14.754064 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:24:14.754071 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:24:14.754079 | orchestrator | 2026-03-18 02:24:14.754087 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-18 02:24:14.754094 | orchestrator | Wednesday 18 March 2026 02:24:13 +0000 (0:00:01.014) 0:00:12.650 ******* 2026-03-18 02:24:14.754102 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:14.754110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:14.754129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:14.754136 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:14.754153 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:39.848483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-18 02:24:39.848592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:39.848604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 
02:24:39.848645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:39.848652 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:39.848672 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:39.848680 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-18 02:24:39.848687 | orchestrator | 2026-03-18 02:24:39.848694 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-18 02:24:39.848702 | orchestrator | Wednesday 18 March 2026 02:24:14 +0000 (0:00:01.768) 0:00:14.418 ******* 2026-03-18 02:24:39.848708 | orchestrator | 2026-03-18 02:24:39.848714 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-18 02:24:39.848735 | orchestrator | Wednesday 18 March 2026 02:24:15 +0000 (0:00:00.345) 0:00:14.764 ******* 2026-03-18 02:24:39.848783 | orchestrator | 2026-03-18 02:24:39.848790 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-18 02:24:39.848801 | orchestrator | Wednesday 18 March 2026 02:24:15 +0000 (0:00:00.144) 0:00:14.908 ******* 2026-03-18 02:24:39.848805 | orchestrator | 2026-03-18 02:24:39.848809 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-03-18 02:24:39.848812 | orchestrator | Wednesday 18 March 2026 02:24:15 +0000 (0:00:00.134) 0:00:15.043 ******* 2026-03-18 02:24:39.848816 | orchestrator | 2026-03-18 02:24:39.848820 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-18 02:24:39.848823 | orchestrator | Wednesday 18 March 2026 02:24:15 +0000 (0:00:00.162) 0:00:15.205 ******* 2026-03-18 02:24:39.848827 | orchestrator | 2026-03-18 02:24:39.848831 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-18 02:24:39.848834 | orchestrator | Wednesday 18 March 2026 02:24:15 +0000 (0:00:00.135) 0:00:15.340 ******* 2026-03-18 02:24:39.848838 | orchestrator | 2026-03-18 02:24:39.848842 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-18 02:24:39.848846 | orchestrator | Wednesday 18 March 2026 02:24:15 +0000 (0:00:00.134) 0:00:15.475 ******* 2026-03-18 02:24:39.848849 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:24:39.848855 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:24:39.848858 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:24:39.848862 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:24:39.848866 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:24:39.848869 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:24:39.848873 | orchestrator | 2026-03-18 02:24:39.848877 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-18 02:24:39.848881 | orchestrator | Wednesday 18 March 2026 02:24:24 +0000 (0:00:08.500) 0:00:23.976 ******* 2026-03-18 02:24:39.848885 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:24:39.848890 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:24:39.848894 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:24:39.848898 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:24:39.848905 | orchestrator | ok: 
[testbed-node-4] 2026-03-18 02:24:39.848909 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:24:39.848913 | orchestrator | 2026-03-18 02:24:39.848917 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-18 02:24:39.848921 | orchestrator | Wednesday 18 March 2026 02:24:25 +0000 (0:00:01.123) 0:00:25.099 ******* 2026-03-18 02:24:39.848924 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:24:39.848928 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:24:39.848932 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:24:39.848936 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:24:39.848939 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:24:39.848943 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:24:39.848947 | orchestrator | 2026-03-18 02:24:39.848950 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-18 02:24:39.848954 | orchestrator | Wednesday 18 March 2026 02:24:33 +0000 (0:00:07.837) 0:00:32.937 ******* 2026-03-18 02:24:39.848958 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-18 02:24:39.848962 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-18 02:24:39.848966 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-18 02:24:39.848970 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-18 02:24:39.848974 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-18 02:24:39.848977 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-18 
02:24:39.848981 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-18 02:24:39.848990 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-18 02:24:52.946313 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-18 02:24:52.946432 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-18 02:24:52.946447 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-18 02:24:52.946459 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-18 02:24:52.946470 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-18 02:24:52.946481 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-18 02:24:52.946492 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-18 02:24:52.946503 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-18 02:24:52.946514 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-18 02:24:52.946525 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-18 02:24:52.946536 | orchestrator | 2026-03-18 02:24:52.946548 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2026-03-18 02:24:52.946560 | orchestrator | Wednesday 18 March 2026 02:24:39 +0000 (0:00:06.477) 0:00:39.414 ******* 2026-03-18 02:24:52.946572 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-18 02:24:52.946584 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:24:52.946597 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-18 02:24:52.946608 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:24:52.946618 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-18 02:24:52.946629 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:24:52.946640 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-18 02:24:52.946651 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-18 02:24:52.946661 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-18 02:24:52.946672 | orchestrator | 2026-03-18 02:24:52.946683 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-18 02:24:52.946694 | orchestrator | Wednesday 18 March 2026 02:24:42 +0000 (0:00:02.429) 0:00:41.843 ******* 2026-03-18 02:24:52.946705 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-18 02:24:52.946716 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:24:52.946764 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-18 02:24:52.946775 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:24:52.946786 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-18 02:24:52.946797 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:24:52.946808 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-18 02:24:52.946819 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-18 02:24:52.946830 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-18 02:24:52.946842 | orchestrator 
| 2026-03-18 02:24:52.946877 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-18 02:24:52.946896 | orchestrator | Wednesday 18 March 2026 02:24:45 +0000 (0:00:03.091) 0:00:44.935 ******* 2026-03-18 02:24:52.946914 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:24:52.946931 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:24:52.946947 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:24:52.946965 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:24:52.947011 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:24:52.947030 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:24:52.947048 | orchestrator | 2026-03-18 02:24:52.947065 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 02:24:52.947085 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-18 02:24:52.947106 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-18 02:24:52.947125 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-18 02:24:52.947144 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-18 02:24:52.947162 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-18 02:24:52.947182 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-18 02:24:52.947202 | orchestrator | 2026-03-18 02:24:52.947220 | orchestrator | 2026-03-18 02:24:52.947239 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 02:24:52.947252 | orchestrator | Wednesday 18 March 2026 02:24:52 +0000 (0:00:07.151) 0:00:52.087 ******* 2026-03-18 02:24:52.947284 | 
orchestrator | =============================================================================== 2026-03-18 02:24:52.947296 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 14.99s 2026-03-18 02:24:52.947307 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.50s 2026-03-18 02:24:52.947317 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.48s 2026-03-18 02:24:52.947328 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.09s 2026-03-18 02:24:52.947339 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.57s 2026-03-18 02:24:52.947349 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.43s 2026-03-18 02:24:52.947360 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.77s 2026-03-18 02:24:52.947370 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.50s 2026-03-18 02:24:52.947381 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.43s 2026-03-18 02:24:52.947392 | orchestrator | module-load : Load modules ---------------------------------------------- 1.33s 2026-03-18 02:24:52.947403 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.21s 2026-03-18 02:24:52.947413 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.20s 2026-03-18 02:24:52.947424 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.12s 2026-03-18 02:24:52.947434 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.06s 2026-03-18 02:24:52.947445 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.01s 2026-03-18 02:24:52.947455 | orchestrator | 
openvswitch : Create /run/openvswitch directory on host ----------------- 0.80s 2026-03-18 02:24:52.947466 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.69s 2026-03-18 02:24:52.947477 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2026-03-18 02:24:55.478923 | orchestrator | 2026-03-18 02:24:55 | INFO  | Task ace0823c-7b20-49e1-a57e-78bcb0c9a65a (ovn) was prepared for execution. 2026-03-18 02:24:55.478995 | orchestrator | 2026-03-18 02:24:55 | INFO  | It takes a moment until task ace0823c-7b20-49e1-a57e-78bcb0c9a65a (ovn) has been started and output is visible here. 2026-03-18 02:25:06.621345 | orchestrator | 2026-03-18 02:25:06.621483 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 02:25:06.621512 | orchestrator | 2026-03-18 02:25:06.621529 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 02:25:06.621546 | orchestrator | Wednesday 18 March 2026 02:24:59 +0000 (0:00:00.166) 0:00:00.166 ******* 2026-03-18 02:25:06.621562 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:25:06.621602 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:25:06.621618 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:25:06.621635 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:25:06.621652 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:25:06.621670 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:25:06.621687 | orchestrator | 2026-03-18 02:25:06.621752 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 02:25:06.621770 | orchestrator | Wednesday 18 March 2026 02:25:00 +0000 (0:00:00.723) 0:00:00.890 ******* 2026-03-18 02:25:06.621786 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-18 02:25:06.621826 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-18 
02:25:06.621843 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-03-18 02:25:06.621860 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-03-18 02:25:06.621877 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-03-18 02:25:06.621893 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-03-18 02:25:06.621910 | orchestrator |
2026-03-18 02:25:06.621927 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-03-18 02:25:06.621946 | orchestrator |
2026-03-18 02:25:06.621964 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-03-18 02:25:06.621981 | orchestrator | Wednesday 18 March 2026 02:25:01 +0000 (0:00:00.843) 0:00:01.733 *******
2026-03-18 02:25:06.621998 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:25:06.622087 | orchestrator |
2026-03-18 02:25:06.622113 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-03-18 02:25:06.622130 | orchestrator | Wednesday 18 March 2026 02:25:02 +0000 (0:00:01.225) 0:00:02.959 *******
2026-03-18 02:25:06.622152 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:06.622175 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:06.622193 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:06.622212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:06.622264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:06.622303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:06.622313 | orchestrator |
2026-03-18 02:25:06.622328 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-03-18 02:25:06.622344 | orchestrator | Wednesday 18 March 2026 02:25:03 +0000 (0:00:01.278) 0:00:04.237 *******
2026-03-18 02:25:06.622361 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:06.622387 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:06.622405 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:06.622423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:06.622440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:06.622458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:06.622478 | orchestrator |
2026-03-18 02:25:06.622491 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-03-18 02:25:06.622508 | orchestrator | Wednesday 18 March 2026 02:25:05 +0000 (0:00:01.568) 0:00:05.805 *******
2026-03-18 02:25:06.622524 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:06.622541 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:06.622570 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:30.704397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:30.704505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:30.704520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:30.704531 | orchestrator |
2026-03-18 02:25:30.704542 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-03-18 02:25:30.704553 | orchestrator | Wednesday 18 March 2026 02:25:06 +0000 (0:00:01.203) 0:00:07.009 *******
2026-03-18 02:25:30.704564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:30.704574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:30.704606 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:30.704616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:30.704626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:30.704651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:30.704710 | orchestrator |
2026-03-18 02:25:30.704721 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-03-18 02:25:30.704731 | orchestrator | Wednesday 18 March 2026 02:25:08 +0000 (0:00:01.609) 0:00:08.618 *******
2026-03-18 02:25:30.704748 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:30.704758 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:30.704768 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:30.704778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:30.704796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:30.704805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:25:30.704815 | orchestrator |
2026-03-18 02:25:30.704825 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-03-18 02:25:30.704834 | orchestrator | Wednesday 18 March 2026 02:25:09 +0000 (0:00:01.353) 0:00:09.972 *******
2026-03-18 02:25:30.704845 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:25:30.704856 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:25:30.704866 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:25:30.704875 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:25:30.704885 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:25:30.704894 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:25:30.704903 | orchestrator |
2026-03-18 02:25:30.704914 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-03-18 02:25:30.704925 | orchestrator | Wednesday 18 March 2026 02:25:12 +0000 (0:00:02.504) 0:00:12.477 *******
2026-03-18 02:25:30.704936 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-03-18 02:25:30.704947 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-03-18 02:25:30.704958 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-03-18 02:25:30.704968 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-03-18 02:25:30.704978 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-03-18 02:25:30.704990 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-03-18 02:25:30.705007 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-18 02:26:06.171345 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-18 02:26:06.171463 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-18 02:26:06.171485 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-18 02:26:06.171502 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-18 02:26:06.171538 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-18 02:26:06.171557 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-18 02:26:06.171576 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-18 02:26:06.171592 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-18 02:26:06.171668 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-18 02:26:06.171690 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-18 02:26:06.171708 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-18 02:26:06.171726 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-18 02:26:06.171745 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-18 02:26:06.171763 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-18 02:26:06.171779 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-18 02:26:06.171797 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-18 02:26:06.171816 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-18 02:26:06.171834 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-18 02:26:06.171851 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-18 02:26:06.171869 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-18 02:26:06.171886 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-18 02:26:06.171905 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-18 02:26:06.171924 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-18 02:26:06.171942 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-18 02:26:06.171961 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-18 02:26:06.171979 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-18 02:26:06.171997 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-18 02:26:06.172015 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-18 02:26:06.172033 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-18 02:26:06.172050 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-18 02:26:06.172068 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-18 02:26:06.172086 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-18 02:26:06.172102 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-18 02:26:06.172121 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-18 02:26:06.172139 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-18 02:26:06.172156 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-03-18 02:26:06.172194 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-03-18 02:26:06.172221 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-03-18 02:26:06.172241 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-03-18 02:26:06.172265 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-03-18 02:26:06.172283 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-03-18 02:26:06.172300 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-18 02:26:06.172317 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-18 02:26:06.172335 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-18 02:26:06.172353 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-18 02:26:06.172370 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-18 02:26:06.172388 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-18 02:26:06.172405 | orchestrator |
2026-03-18 02:26:06.172423 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-18 02:26:06.172441 | orchestrator | Wednesday 18 March 2026 02:25:30 +0000 (0:00:18.009) 0:00:30.486 *******
2026-03-18 02:26:06.172459 | orchestrator |
2026-03-18 02:26:06.172477 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-18 02:26:06.172494 | orchestrator | Wednesday 18 March 2026 02:25:30 +0000 (0:00:00.255) 0:00:30.741 *******
2026-03-18 02:26:06.172511 | orchestrator |
2026-03-18 02:26:06.172528 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-18 02:26:06.172546 | orchestrator | Wednesday 18 March 2026 02:25:30 +0000 (0:00:00.066) 0:00:30.807 *******
2026-03-18 02:26:06.172564 | orchestrator |
2026-03-18 02:26:06.172582 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-18 02:26:06.172600 | orchestrator | Wednesday 18 March 2026 02:25:30 +0000 (0:00:00.066) 0:00:30.874 *******
2026-03-18 02:26:06.172645 | orchestrator |
2026-03-18 02:26:06.172662 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-18 02:26:06.172678 | orchestrator | Wednesday 18 March 2026 02:25:30 +0000 (0:00:00.070) 0:00:30.944 *******
2026-03-18 02:26:06.172694 | orchestrator |
2026-03-18 02:26:06.172710 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-18 02:26:06.172726 | orchestrator | Wednesday 18 March 2026 02:25:30 +0000 (0:00:00.075) 0:00:31.020 *******
2026-03-18 02:26:06.172742 | orchestrator |
2026-03-18 02:26:06.172759 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-03-18 02:26:06.172776 | orchestrator | Wednesday 18 March 2026 02:25:30 +0000 (0:00:00.066) 0:00:31.086 *******
2026-03-18 02:26:06.172792 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:26:06.172810 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:26:06.172826 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:26:06.172842 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:26:06.172857 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:26:06.172873 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:26:06.172889 | orchestrator |
2026-03-18 02:26:06.172905 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-03-18 02:26:06.172922 | orchestrator | Wednesday 18 March 2026 02:25:32 +0000 (0:00:01.705) 0:00:32.792 *******
2026-03-18 02:26:06.172941 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:26:06.172958 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:26:06.172985 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:26:06.173001 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:26:06.173018 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:26:06.173034 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:26:06.173052 | orchestrator |
2026-03-18 02:26:06.173066 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-03-18 02:26:06.173075 | orchestrator |
2026-03-18 02:26:06.173083 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-18 02:26:06.173091 | orchestrator | Wednesday 18 March 2026 02:26:03 +0000 (0:00:31.525) 0:01:04.318 *******
2026-03-18 02:26:06.173099 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:26:06.173106 | orchestrator |
2026-03-18 02:26:06.173114 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-18 02:26:06.173122 | orchestrator | Wednesday 18 March 2026 02:26:04 +0000 (0:00:00.777) 0:01:05.095 *******
2026-03-18 02:26:06.173130 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:26:06.173138 | orchestrator |
2026-03-18 02:26:06.173145 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-03-18 02:26:06.173153 | orchestrator | Wednesday 18 March 2026 02:26:05 +0000 (0:00:00.551) 0:01:05.647 *******
2026-03-18 02:26:06.173161 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:26:06.173169 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:26:06.173177 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:26:06.173185 | orchestrator |
2026-03-18 02:26:06.173192 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-03-18 02:26:06.173208 | orchestrator | Wednesday 18 March 2026 02:26:06 +0000 (0:00:00.348) 0:01:06.553 *******
2026-03-18 02:26:17.581075 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:26:17.581185 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:26:17.581198 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:26:17.581209 | orchestrator |
2026-03-18 02:26:17.581220 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-03-18 02:26:17.581232 | orchestrator | Wednesday 18 March 2026 02:26:06 +0000 (0:00:00.346) 0:01:06.902 *******
2026-03-18 02:26:17.581241 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:26:17.581267 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:26:17.581278 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:26:17.581288 | orchestrator |
2026-03-18 02:26:17.581298 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-03-18 02:26:17.581308 | orchestrator | Wednesday 18 March 2026 02:26:06 +0000 (0:00:00.307) 0:01:07.249 *******
2026-03-18 02:26:17.581317 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:26:17.581327 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:26:17.581336 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:26:17.581346 | orchestrator |
2026-03-18 02:26:17.581355 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-03-18 02:26:17.581365 | orchestrator | Wednesday 18 March 2026 02:26:07 +0000 (0:00:00.524) 0:01:07.557 *******
2026-03-18 02:26:17.581375 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:26:17.581384 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:26:17.581394 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:26:17.581403 | orchestrator |
2026-03-18 02:26:17.581413 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-03-18 02:26:17.581423 | orchestrator | Wednesday 18 March 2026 02:26:07 +0000 (0:00:00.296) 0:01:08.081 *******
2026-03-18 02:26:17.581432 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:26:17.581443 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:26:17.581453 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:26:17.581462 | orchestrator |
2026-03-18 02:26:17.581472 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-03-18 02:26:17.581481 | orchestrator | Wednesday 18 March 2026 02:26:07 +0000 (0:00:00.330) 0:01:08.378 *******
2026-03-18 02:26:17.581510 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:26:17.581521 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:26:17.581530 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:26:17.581540 | orchestrator |
2026-03-18 02:26:17.581549 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-03-18 02:26:17.581559 | orchestrator | Wednesday 18 March 2026 02:26:08 +0000 (0:00:00.301) 0:01:08.708 *******
2026-03-18 02:26:17.581569 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:26:17.581578 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:26:17.581588 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:26:17.581597 | orchestrator |
2026-03-18 02:26:17.581636 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-03-18 02:26:17.581648 | orchestrator | Wednesday 18 March 2026 02:26:08 +0000 (0:00:00.294) 0:01:09.010 *******
2026-03-18 02:26:17.581659 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:26:17.581670 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:26:17.581681 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:26:17.581692 | orchestrator |
2026-03-18 02:26:17.581703 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-03-18 02:26:17.581714 | orchestrator | Wednesday 18 March 2026 02:26:08 +0000 (0:00:00.517) 0:01:09.305 *******
2026-03-18 02:26:17.581725 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:26:17.581736 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:26:17.581747 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:26:17.581759 | orchestrator |
2026-03-18 02:26:17.581769 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-03-18 02:26:17.581780 | orchestrator | Wednesday 18 March 2026 02:26:09 +0000 (0:00:00.303) 0:01:09.822 *******
2026-03-18 02:26:17.581791 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:26:17.581802 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:26:17.581813 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:26:17.581824 | orchestrator |
2026-03-18 02:26:17.581835 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-03-18 02:26:17.581846 | orchestrator | Wednesday 18 March 2026 02:26:09 +0000 (0:00:00.327) 0:01:10.126 *******
2026-03-18 02:26:17.581857 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:26:17.581868 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:26:17.581879 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:26:17.581888 | orchestrator |
2026-03-18 02:26:17.581898 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-18 02:26:17.581907 | orchestrator | Wednesday 18 March 2026 02:26:10 +0000 (0:00:00.300) 0:01:10.453 *******
2026-03-18 02:26:17.581917 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:26:17.581926 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:26:17.581936 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:26:17.581945 | orchestrator |
2026-03-18 02:26:17.581955 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-18 02:26:17.581965 | orchestrator | Wednesday 18 March 2026 02:26:10 +0000 (0:00:00.536) 0:01:10.754 *******
2026-03-18 02:26:17.581974 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:26:17.581984 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:26:17.581993 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:26:17.582003 | orchestrator |
2026-03-18 02:26:17.582012 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-18 02:26:17.582084 | orchestrator | Wednesday 18 March 2026 02:26:10 +0000 (0:00:00.326) 0:01:11.290 *******
2026-03-18 02:26:17.582094 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:26:17.582104 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:26:17.582113 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:26:17.582123 | orchestrator |
2026-03-18 02:26:17.582133 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-18 02:26:17.582143 | orchestrator | Wednesday 18 March 2026 02:26:11 +0000 (0:00:00.297) 0:01:11.617 *******
2026-03-18 02:26:17.582153 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:26:17.582170 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:26:17.582179 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:26:17.582189 | orchestrator |
2026-03-18 02:26:17.582199 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-18 02:26:17.582209 | orchestrator | Wednesday 18 March 2026 02:26:11 +0000 (0:00:00.291) 0:01:11.914 *******
2026-03-18 02:26:17.582244 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:26:17.582255 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:26:17.582264 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:26:17.582274 | orchestrator |
2026-03-18 02:26:17.582284 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-18 02:26:17.582293 | orchestrator | Wednesday 18 March 2026 02:26:11 +0000 (0:00:00.849) 0:01:12.206 *******
2026-03-18 02:26:17.582309 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:26:17.582319 | orchestrator |
2026-03-18 02:26:17.582329 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-03-18 02:26:17.582338 | orchestrator | Wednesday 18 March 2026 02:26:12 +0000 (0:00:00.463) 0:01:13.056 *******
2026-03-18 02:26:17.582348 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:26:17.582358 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:26:17.582367 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:26:17.582377 | orchestrator |
2026-03-18 02:26:17.582386 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-03-18 02:26:17.582396 | orchestrator | Wednesday 18 March 2026 02:26:13 +0000 (0:00:00.460) 0:01:13.519 *******
2026-03-18 02:26:17.582405 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:26:17.582415 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:26:17.582424 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:26:17.582434 | orchestrator |
2026-03-18 02:26:17.582444 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-03-18 02:26:17.582453 | orchestrator | Wednesday 18 March 2026 02:26:13 +0000 (0:00:00.348) 0:01:13.980 *******
2026-03-18 02:26:17.582463 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:26:17.582472 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:26:17.582482 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:26:17.582491 | orchestrator |
2026-03-18 02:26:17.582501 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-03-18 02:26:17.582511 | orchestrator | Wednesday 18 March 2026 02:26:13 +0000 (0:00:00.616) 0:01:14.329 *******
2026-03-18 02:26:17.582520 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:26:17.582530 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:26:17.582539 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:26:17.582549 | orchestrator |
2026-03-18 02:26:17.582559 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-03-18 02:26:17.582568 | orchestrator | Wednesday 18 March 2026 02:26:14 +0000 (0:00:00.351) 0:01:14.945 *******
2026-03-18 02:26:17.582578 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:26:17.582587 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:26:17.582597 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:26:17.582625 | orchestrator |
2026-03-18 02:26:17.582635 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-03-18 02:26:17.582644 | orchestrator | Wednesday 18 March 2026 02:26:14 +0000 (0:00:00.351) 0:01:15.296 *******
2026-03-18 02:26:17.582654 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:26:17.582663 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:26:17.582673 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:26:17.582682 | orchestrator |
2026-03-18 02:26:17.582692 | orchestrator | TASK [ovn-db : Set 
bootstrap args fact for NB (new member)] ******************** 2026-03-18 02:26:17.582702 | orchestrator | Wednesday 18 March 2026 02:26:15 +0000 (0:00:00.327) 0:01:15.624 ******* 2026-03-18 02:26:17.582711 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:26:17.582721 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:26:17.582741 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:26:17.582751 | orchestrator | 2026-03-18 02:26:17.582761 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-18 02:26:17.582770 | orchestrator | Wednesday 18 March 2026 02:26:15 +0000 (0:00:00.335) 0:01:15.960 ******* 2026-03-18 02:26:17.582780 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:26:17.582789 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:26:17.582799 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:26:17.582808 | orchestrator | 2026-03-18 02:26:17.582818 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-18 02:26:17.582827 | orchestrator | Wednesday 18 March 2026 02:26:16 +0000 (0:00:00.588) 0:01:16.548 ******* 2026-03-18 02:26:17.582840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:17.582852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-18 02:26:17.582862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:17.582881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893571 | orchestrator | 2026-03-18 02:26:23.893584 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-18 02:26:23.893646 | orchestrator | Wednesday 18 March 2026 02:26:17 +0000 (0:00:01.420) 0:01:17.969 ******* 2026-03-18 02:26:23.893661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893824 | orchestrator | 2026-03-18 02:26:23.893845 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-18 02:26:23.893865 | orchestrator | Wednesday 18 March 2026 02:26:21 +0000 (0:00:03.800) 0:01:21.769 ******* 2026-03-18 02:26:23.893885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.893965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:23.894009 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:47.378364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:47.378470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:47.378503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:47.378514 | orchestrator | 2026-03-18 02:26:47.378525 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-18 02:26:47.378535 | 
orchestrator | Wednesday 18 March 2026 02:26:23 +0000 (0:00:02.072) 0:01:23.842 ******* 2026-03-18 02:26:47.378543 | orchestrator | 2026-03-18 02:26:47.378552 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-18 02:26:47.378560 | orchestrator | Wednesday 18 March 2026 02:26:23 +0000 (0:00:00.071) 0:01:23.913 ******* 2026-03-18 02:26:47.378569 | orchestrator | 2026-03-18 02:26:47.378643 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-18 02:26:47.378652 | orchestrator | Wednesday 18 March 2026 02:26:23 +0000 (0:00:00.269) 0:01:24.183 ******* 2026-03-18 02:26:47.378661 | orchestrator | 2026-03-18 02:26:47.378669 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-18 02:26:47.378678 | orchestrator | Wednesday 18 March 2026 02:26:23 +0000 (0:00:00.094) 0:01:24.277 ******* 2026-03-18 02:26:47.378687 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:26:47.378697 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:26:47.378705 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:26:47.378714 | orchestrator | 2026-03-18 02:26:47.378723 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-18 02:26:47.378731 | orchestrator | Wednesday 18 March 2026 02:26:31 +0000 (0:00:07.484) 0:01:31.761 ******* 2026-03-18 02:26:47.378740 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:26:47.378749 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:26:47.378757 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:26:47.378765 | orchestrator | 2026-03-18 02:26:47.378774 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-18 02:26:47.378783 | orchestrator | Wednesday 18 March 2026 02:26:37 +0000 (0:00:06.454) 0:01:38.216 ******* 2026-03-18 02:26:47.378791 | orchestrator | changed: 
[testbed-node-0] 2026-03-18 02:26:47.378800 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:26:47.378808 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:26:47.378817 | orchestrator | 2026-03-18 02:26:47.378825 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-18 02:26:47.378834 | orchestrator | Wednesday 18 March 2026 02:26:40 +0000 (0:00:02.408) 0:01:40.625 ******* 2026-03-18 02:26:47.378843 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:26:47.378851 | orchestrator | 2026-03-18 02:26:47.378860 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-18 02:26:47.378868 | orchestrator | Wednesday 18 March 2026 02:26:40 +0000 (0:00:00.127) 0:01:40.752 ******* 2026-03-18 02:26:47.378877 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:26:47.378887 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:26:47.378897 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:26:47.378908 | orchestrator | 2026-03-18 02:26:47.378918 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-18 02:26:47.378927 | orchestrator | Wednesday 18 March 2026 02:26:41 +0000 (0:00:01.036) 0:01:41.789 ******* 2026-03-18 02:26:47.378936 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:26:47.378946 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:26:47.378955 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:26:47.378973 | orchestrator | 2026-03-18 02:26:47.378983 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-18 02:26:47.378992 | orchestrator | Wednesday 18 March 2026 02:26:41 +0000 (0:00:00.614) 0:01:42.403 ******* 2026-03-18 02:26:47.379002 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:26:47.379012 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:26:47.379021 | orchestrator | ok: [testbed-node-2] 2026-03-18 
02:26:47.379031 | orchestrator | 2026-03-18 02:26:47.379040 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-18 02:26:47.379050 | orchestrator | Wednesday 18 March 2026 02:26:42 +0000 (0:00:00.783) 0:01:43.187 ******* 2026-03-18 02:26:47.379060 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:26:47.379070 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:26:47.379093 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:26:47.379102 | orchestrator | 2026-03-18 02:26:47.379111 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-18 02:26:47.379119 | orchestrator | Wednesday 18 March 2026 02:26:43 +0000 (0:00:00.625) 0:01:43.812 ******* 2026-03-18 02:26:47.379128 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:26:47.379137 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:26:47.379161 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:26:47.379170 | orchestrator | 2026-03-18 02:26:47.379179 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-18 02:26:47.379187 | orchestrator | Wednesday 18 March 2026 02:26:44 +0000 (0:00:01.405) 0:01:45.217 ******* 2026-03-18 02:26:47.379196 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:26:47.379204 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:26:47.379212 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:26:47.379221 | orchestrator | 2026-03-18 02:26:47.379230 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-18 02:26:47.379239 | orchestrator | Wednesday 18 March 2026 02:26:45 +0000 (0:00:00.764) 0:01:45.981 ******* 2026-03-18 02:26:47.379247 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:26:47.379256 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:26:47.379264 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:26:47.379273 | orchestrator | 2026-03-18 
02:26:47.379281 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-18 02:26:47.379290 | orchestrator | Wednesday 18 March 2026 02:26:45 +0000 (0:00:00.339) 0:01:46.321 ******* 2026-03-18 02:26:47.379301 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:47.379313 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:47.379321 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:47.379330 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:47.379345 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:47.379354 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:47.379363 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:47.379377 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:47.379394 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:54.575140 | orchestrator | 2026-03-18 02:26:54.575237 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-18 02:26:54.575250 | orchestrator | Wednesday 18 March 2026 02:26:47 +0000 (0:00:01.445) 0:01:47.766 ******* 2026-03-18 02:26:54.575262 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:54.575273 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:54.575281 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 02:26:54.575290 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:26:54.575321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:26:54.575330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:26:54.575338 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:26:54.575347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:26:54.575368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:26:54.575377 | orchestrator |
2026-03-18 02:26:54.575386 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-18 02:26:54.575394 | orchestrator | Wednesday 18 March 2026 02:26:51 +0000 (0:00:03.971) 0:01:51.737 *******
2026-03-18 02:26:54.575418 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:26:54.575427 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:26:54.575435 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:26:54.575444 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:26:54.575459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:26:54.575468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:26:54.575476 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:26:54.575484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:26:54.575492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 02:26:54.575501 | orchestrator |
2026-03-18 02:26:54.575513 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-18 02:26:54.575521 | orchestrator | Wednesday 18 March 2026 02:26:54 +0000 (0:00:02.998) 0:01:54.735 *******
2026-03-18 02:26:54.575529 | orchestrator |
2026-03-18 02:26:54.575537 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-18 02:26:54.575545 | orchestrator | Wednesday 18 March 2026 02:26:54 +0000 (0:00:00.065) 0:01:54.801 *******
2026-03-18 02:26:54.575553 | orchestrator |
2026-03-18 02:26:54.575561 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-18 02:26:54.575649 | orchestrator | Wednesday 18 March 2026 02:26:54 +0000 (0:00:00.075) 0:01:54.877 *******
2026-03-18 02:26:54.575661 | orchestrator |
2026-03-18 02:26:54.575676 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-18 02:27:18.969870 | orchestrator | Wednesday 18 March 2026 02:26:54 +0000 (0:00:00.070) 0:01:54.947 *******
2026-03-18 02:27:18.969965 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:27:18.969977 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:27:18.969986 | orchestrator |
2026-03-18 02:27:18.969995 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-18 02:27:18.970003 | orchestrator | Wednesday 18 March 2026 02:27:00 +0000 (0:00:06.258) 0:02:01.205 *******
2026-03-18 02:27:18.970011 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:27:18.970074 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:27:18.970082 | orchestrator |
2026-03-18 02:27:18.970090 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-18 02:27:18.970098 | orchestrator | Wednesday 18 March 2026 02:27:06 +0000 (0:00:06.187) 0:02:07.393 *******
2026-03-18 02:27:18.970127 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:27:18.970135 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:27:18.970143 | orchestrator |
2026-03-18 02:27:18.970150 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-18 02:27:18.970157 | orchestrator | Wednesday 18 March 2026 02:27:13 +0000 (0:00:06.214) 0:02:13.608 *******
2026-03-18 02:27:18.970164 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:27:18.970171 | orchestrator |
2026-03-18 02:27:18.970179 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-18 02:27:18.970186 | orchestrator | Wednesday 18 March 2026 02:27:13 +0000 (0:00:00.199) 0:02:13.807 *******
2026-03-18 02:27:18.970192 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:27:18.970201 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:27:18.970207 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:27:18.970214 | orchestrator |
2026-03-18 02:27:18.970221 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-18 02:27:18.970228 | orchestrator | Wednesday 18 March 2026 02:27:14 +0000 (0:00:01.066) 0:02:14.873 *******
2026-03-18 02:27:18.970235 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:27:18.970242 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:27:18.970249 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:27:18.970256 | orchestrator |
2026-03-18 02:27:18.970263 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-18 02:27:18.970270 | orchestrator | Wednesday 18 March 2026 02:27:15 +0000 (0:00:00.656) 0:02:15.530 *******
2026-03-18 02:27:18.970277 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:27:18.970286 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:27:18.970294 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:27:18.970302 | orchestrator |
2026-03-18 02:27:18.970310 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-18 02:27:18.970318 | orchestrator | Wednesday 18 March 2026 02:27:15 +0000 (0:00:00.818) 0:02:16.349 *******
2026-03-18 02:27:18.970326 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:27:18.970334 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:27:18.970342 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:27:18.970350 | orchestrator |
2026-03-18 02:27:18.970358 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-18 02:27:18.970366 | orchestrator | Wednesday 18 March 2026 02:27:16 +0000 (0:00:00.656) 0:02:16.967 *******
2026-03-18 02:27:18.970373 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:27:18.970381 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:27:18.970388 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:27:18.970395 | orchestrator |
2026-03-18 02:27:18.970403 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-18 02:27:18.970410 | orchestrator | Wednesday 18 March 2026 02:27:17 +0000 (0:00:01.036) 0:02:18.004 *******
2026-03-18 02:27:18.970418 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:27:18.970426 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:27:18.970433 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:27:18.970442 | orchestrator |
2026-03-18 02:27:18.970450 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 02:27:18.970459 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-18 02:27:18.970469 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-18 02:27:18.970477 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-18 02:27:18.970486 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 02:27:18.970494 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 02:27:18.970510 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 02:27:18.970518 | orchestrator |
2026-03-18 02:27:18.970526 | orchestrator |
2026-03-18 02:27:18.970534 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 02:27:18.970577 | orchestrator | Wednesday 18 March 2026 02:27:18 +0000 (0:00:00.906) 0:02:18.910 *******
2026-03-18 02:27:18.970586 | orchestrator | ===============================================================================
2026-03-18 02:27:18.970594 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 31.53s
2026-03-18 02:27:18.970602 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.01s
2026-03-18 02:27:18.970610 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.74s
2026-03-18 02:27:18.970618 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 12.64s
2026-03-18 02:27:18.970626 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.62s
2026-03-18 02:27:18.970650 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.97s
2026-03-18 02:27:18.970658 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.80s
2026-03-18 02:27:18.970666 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.00s
2026-03-18 02:27:18.970674 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.50s
2026-03-18 02:27:18.970682 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.07s
2026-03-18 02:27:18.970690 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.71s
2026-03-18 02:27:18.970698 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.61s
2026-03-18 02:27:18.970706 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.57s
2026-03-18 02:27:18.970714 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s
2026-03-18 02:27:18.970722 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.42s
2026-03-18 02:27:18.970730 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.41s
2026-03-18 02:27:18.970738 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.35s
2026-03-18 02:27:18.970747 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.28s
2026-03-18 02:27:18.970755 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.23s
2026-03-18 02:27:18.970763 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.20s
2026-03-18 02:27:19.371652 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-18 02:27:19.371736 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh
2026-03-18 02:27:21.686641 | orchestrator | 2026-03-18 02:27:21 | INFO  | Trying to run play wipe-partitions in environment custom
2026-03-18 02:27:31.836701 | orchestrator | 2026-03-18 02:27:31 | INFO  | Task c2bf78bb-edb7-406a-8522-0dfceeff14a3 (wipe-partitions) was prepared for execution.
2026-03-18 02:27:31.836833 | orchestrator | 2026-03-18 02:27:31 | INFO  | It takes a moment until task c2bf78bb-edb7-406a-8522-0dfceeff14a3 (wipe-partitions) has been started and output is visible here.
2026-03-18 02:27:45.102359 | orchestrator |
2026-03-18 02:27:45.102477 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-03-18 02:27:45.102494 | orchestrator |
2026-03-18 02:27:45.102503 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-03-18 02:27:45.102544 | orchestrator | Wednesday 18 March 2026 02:27:36 +0000 (0:00:00.142) 0:00:00.142 *******
2026-03-18 02:27:45.102552 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:27:45.102561 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:27:45.102595 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:27:45.102604 | orchestrator |
2026-03-18 02:27:45.102612 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-03-18 02:27:45.102619 | orchestrator | Wednesday 18 March 2026 02:27:36 +0000 (0:00:00.610) 0:00:00.753 *******
2026-03-18 02:27:45.102627 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:27:45.102635 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:27:45.102642 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:27:45.102649 | orchestrator |
2026-03-18 02:27:45.102657 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-03-18 02:27:45.102676 | orchestrator | Wednesday 18 March 2026 02:27:37 +0000 (0:00:00.410) 0:00:01.163 *******
2026-03-18 02:27:45.102691 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:27:45.102701 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:27:45.102709 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:27:45.102716 | orchestrator |
2026-03-18 02:27:45.102724 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-03-18 02:27:45.102731 | orchestrator | Wednesday 18 March 2026 02:27:38 +0000 (0:00:00.610) 0:00:01.773 *******
2026-03-18 02:27:45.102739 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:27:45.102746 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:27:45.102754 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:27:45.102763 | orchestrator |
2026-03-18 02:27:45.102771 | orchestrator | TASK [Check device availability] ***********************************************
2026-03-18 02:27:45.102778 | orchestrator | Wednesday 18 March 2026 02:27:38 +0000 (0:00:00.307) 0:00:02.081 *******
2026-03-18 02:27:45.102786 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-18 02:27:45.102795 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-18 02:27:45.102802 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-18 02:27:45.102810 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-18 02:27:45.102818 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-18 02:27:45.102825 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-18 02:27:45.102833 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-18 02:27:45.102841 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-18 02:27:45.102863 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-18 02:27:45.102872 | orchestrator |
2026-03-18 02:27:45.102879 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-03-18 02:27:45.102886 | orchestrator | Wednesday 18 March 2026 02:27:39 +0000 (0:00:01.220) 0:00:03.302 *******
2026-03-18 02:27:45.102893 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-03-18 02:27:45.102900 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-03-18 02:27:45.102906 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-03-18 02:27:45.102913 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-03-18 02:27:45.102920 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-03-18 02:27:45.102927 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-03-18 02:27:45.102934 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-03-18 02:27:45.102942 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-03-18 02:27:45.102949 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-03-18 02:27:45.102956 | orchestrator |
2026-03-18 02:27:45.102963 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-03-18 02:27:45.102971 | orchestrator | Wednesday 18 March 2026 02:27:41 +0000 (0:00:01.626) 0:00:04.929 *******
2026-03-18 02:27:45.102978 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-18 02:27:45.102986 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-18 02:27:45.102994 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-18 02:27:45.103002 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-18 02:27:45.103009 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-18 02:27:45.103016 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-18 02:27:45.103033 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-18 02:27:45.103041 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-18 02:27:45.103048 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-18 02:27:45.103056 | orchestrator |
2026-03-18 02:27:45.103064 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-03-18 02:27:45.103071 | orchestrator | Wednesday 18 March 2026 02:27:43 +0000 (0:00:02.195) 0:00:07.124 *******
2026-03-18 02:27:45.103079 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:27:45.103087 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:27:45.103095 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:27:45.103102 | orchestrator |
2026-03-18 02:27:45.103110 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-03-18 02:27:45.103117 | orchestrator | Wednesday 18 March 2026 02:27:43 +0000 (0:00:00.615) 0:00:07.739 *******
2026-03-18 02:27:45.103125 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:27:45.103132 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:27:45.103139 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:27:45.103147 | orchestrator |
2026-03-18 02:27:45.103154 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 02:27:45.103164 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 02:27:45.103173 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 02:27:45.103201 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 02:27:45.103211 | orchestrator |
2026-03-18 02:27:45.103219 | orchestrator |
2026-03-18 02:27:45.103227 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 02:27:45.103234 | orchestrator | Wednesday 18 March 2026 02:27:44 +0000 (0:00:00.710) 0:00:08.450 *******
2026-03-18 02:27:45.103242 | orchestrator | ===============================================================================
2026-03-18 02:27:45.103250 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.20s
2026-03-18 02:27:45.103257 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.63s
2026-03-18 02:27:45.103265 | orchestrator | Check device availability ----------------------------------------------- 1.22s
2026-03-18 02:27:45.103273 | orchestrator | Request device events from the kernel ----------------------------------- 0.71s
2026-03-18 02:27:45.103280 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s
2026-03-18 02:27:45.103287 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.61s
2026-03-18 02:27:45.103295 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.61s
2026-03-18 02:27:45.103302 | orchestrator | Remove all rook related logical devices --------------------------------- 0.41s
2026-03-18 02:27:45.103310 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.31s
2026-03-18 02:27:57.835704 | orchestrator | 2026-03-18 02:27:57 | INFO  | Task c01241f2-6c4e-4293-9c40-f53e668d1733 (facts) was prepared for execution.
2026-03-18 02:27:57.835785 | orchestrator | 2026-03-18 02:27:57 | INFO  | It takes a moment until task c01241f2-6c4e-4293-9c40-f53e668d1733 (facts) has been started and output is visible here.
2026-03-18 02:28:11.625543 | orchestrator |
2026-03-18 02:28:11.625640 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-18 02:28:11.625648 | orchestrator |
2026-03-18 02:28:11.625653 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-18 02:28:11.625658 | orchestrator | Wednesday 18 March 2026 02:28:02 +0000 (0:00:00.298) 0:00:00.298 *******
2026-03-18 02:28:11.625662 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:28:11.625686 | orchestrator | ok: [testbed-manager]
2026-03-18 02:28:11.625690 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:28:11.625694 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:28:11.625698 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:28:11.625702 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:28:11.625706 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:28:11.625710 | orchestrator |
2026-03-18 02:28:11.625714 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-18 02:28:11.625718 | orchestrator | Wednesday 18 March 2026 02:28:03 +0000 (0:00:01.190) 0:00:01.488 *******
2026-03-18 02:28:11.625722 | orchestrator | skipping: [testbed-manager]
2026-03-18 02:28:11.625727 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:28:11.625732 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:28:11.625735 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:28:11.625739 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:28:11.625743 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:28:11.625747 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:28:11.625750 | orchestrator |
2026-03-18 02:28:11.625754 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-18 02:28:11.625758 | orchestrator |
2026-03-18 02:28:11.625762 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-18 02:28:11.625766 | orchestrator | Wednesday 18 March 2026 02:28:05 +0000 (0:00:01.426) 0:00:02.915 *******
2026-03-18 02:28:11.625770 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:28:11.625773 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:28:11.625777 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:28:11.625781 | orchestrator | ok: [testbed-manager]
2026-03-18 02:28:11.625785 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:28:11.625788 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:28:11.625792 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:28:11.625796 | orchestrator |
2026-03-18 02:28:11.625800 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-18 02:28:11.625803 | orchestrator |
2026-03-18 02:28:11.625807 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-18 02:28:11.625811 | orchestrator | Wednesday 18 March 2026 02:28:10 +0000 (0:00:05.383) 0:00:08.299 *******
2026-03-18 02:28:11.625815 | orchestrator | skipping: [testbed-manager]
2026-03-18 02:28:11.625818 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:28:11.625822 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:28:11.625826 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:28:11.625830 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:28:11.625833 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:28:11.625837 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:28:11.625841 | orchestrator |
2026-03-18 02:28:11.625845 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 02:28:11.625849 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 02:28:11.625912 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 02:28:11.625919 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 02:28:11.625923 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 02:28:11.625927 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 02:28:11.625930 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 02:28:11.625934 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 02:28:11.625943 | orchestrator |
2026-03-18 02:28:11.625947 | orchestrator |
2026-03-18 02:28:11.625951 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 02:28:11.625954 | orchestrator | Wednesday 18 March 2026 02:28:11 +0000 (0:00:00.593) 0:00:08.892 *******
2026-03-18 02:28:11.625959 | orchestrator | ===============================================================================
2026-03-18 02:28:11.625965 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.38s
2026-03-18 02:28:11.625972 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.43s
2026-03-18 02:28:11.625978 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.19s
2026-03-18 02:28:11.625983 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s
2026-03-18 02:28:14.263688 | orchestrator | 2026-03-18 02:28:14 | INFO  | Task ff6c873e-f43a-486c-9968-28fe25bd308c (ceph-configure-lvm-volumes) was prepared for execution.
2026-03-18 02:28:14.263794 | orchestrator | 2026-03-18 02:28:14 | INFO  | It takes a moment until task ff6c873e-f43a-486c-9968-28fe25bd308c (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-03-18 02:28:28.051992 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-18 02:28:28.052090 | orchestrator | 2.16.14
2026-03-18 02:28:28.052101 | orchestrator |
2026-03-18 02:28:28.052109 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-18 02:28:28.052116 | orchestrator |
2026-03-18 02:28:28.052122 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-18 02:28:28.052129 | orchestrator | Wednesday 18 March 2026 02:28:19 +0000 (0:00:00.409) 0:00:00.409 *******
2026-03-18 02:28:28.052137 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-18 02:28:28.052143 | orchestrator |
2026-03-18 02:28:28.052150 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-18 02:28:28.052170 | orchestrator | Wednesday 18 March 2026 02:28:19 +0000 (0:00:00.279) 0:00:00.689 *******
2026-03-18 02:28:28.052177 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:28:28.052184 | orchestrator |
2026-03-18 02:28:28.052190 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:28:28.052196 | orchestrator | Wednesday 18 March 2026 02:28:20 +0000 (0:00:00.240) 0:00:00.929 *******
2026-03-18 02:28:28.052203 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-18 02:28:28.052210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-18 02:28:28.052216 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-18 02:28:28.052222 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-18 02:28:28.052228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-18 02:28:28.052235 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-18 02:28:28.052241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-18 02:28:28.052247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-18 02:28:28.052254 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-18 02:28:28.052260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-18 02:28:28.052267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-18 02:28:28.052273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-18 02:28:28.052279 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-18 02:28:28.052303 | orchestrator |
2026-03-18 02:28:28.052310 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:28:28.052316 | orchestrator | Wednesday 18 March 2026 02:28:20 +0000 (0:00:00.550) 0:00:01.480 *******
2026-03-18 02:28:28.052321 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:28:28.052328 | orchestrator |
2026-03-18 02:28:28.052334 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:28:28.052340 | orchestrator | Wednesday 18 March 2026 02:28:20 +0000 (0:00:00.237) 0:00:01.717 *******
2026-03-18 02:28:28.052346 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:28:28.052352 | orchestrator |
2026-03-18 02:28:28.052358 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:28:28.052364 | orchestrator | Wednesday 18 March 2026 02:28:21 +0000 (0:00:00.246) 0:00:01.964 *******
2026-03-18 02:28:28.052370 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:28:28.052376 | orchestrator |
2026-03-18 02:28:28.052382 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:28:28.052388 | orchestrator | Wednesday 18 March 2026 02:28:21 +0000 (0:00:00.219) 0:00:02.183 *******
2026-03-18 02:28:28.052394 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:28:28.052400 | orchestrator |
2026-03-18 02:28:28.052406 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:28:28.052412 | orchestrator | Wednesday 18 March 2026 02:28:21 +0000 (0:00:00.225) 0:00:02.409 *******
2026-03-18 02:28:28.052419 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:28:28.052424 | orchestrator |
2026-03-18 02:28:28.052430 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:28:28.052436 | orchestrator | Wednesday 18 March 2026 02:28:21 +0000 (0:00:00.246) 0:00:02.655 *******
2026-03-18 02:28:28.052442 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:28:28.052447 | orchestrator |
2026-03-18 02:28:28.052453 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:28:28.052459 | orchestrator | Wednesday 18 March 2026 02:28:21 +0000 (0:00:00.223) 0:00:02.879 *******
2026-03-18 02:28:28.052534 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:28:28.052540 | orchestrator |
2026-03-18 02:28:28.052546 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:28:28.052552 | orchestrator | Wednesday 18 March 2026 02:28:22 +0000 (0:00:00.222) 0:00:03.102 *******
2026-03-18 02:28:28.052558 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:28:28.052564 | orchestrator |
2026-03-18 02:28:28.052571 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:28:28.052578 | orchestrator | Wednesday 18 March 2026 02:28:22 +0000 (0:00:00.254) 0:00:03.356 *******
2026-03-18 02:28:28.052584 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561)
2026-03-18 02:28:28.052593 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561)
2026-03-18 02:28:28.052600 | orchestrator |
2026-03-18 02:28:28.052607 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:28:28.052629 | orchestrator | Wednesday 18 March 2026 02:28:22 +0000 (0:00:00.482) 0:00:03.838 *******
2026-03-18 02:28:28.052636 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768)
2026-03-18 02:28:28.052643 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768)
2026-03-18 02:28:28.052650 | orchestrator |
2026-03-18 02:28:28.052656 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:28:28.052663 | orchestrator | Wednesday 18 March 2026 02:28:23 +0000 (0:00:00.722) 0:00:04.560 *******
2026-03-18 02:28:28.052670 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e)
2026-03-18 02:28:28.052681 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e)
2026-03-18 02:28:28.052697 | orchestrator |
2026-03-18 02:28:28.052703 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:28:28.052709 | orchestrator | Wednesday 18 March 2026 02:28:24 +0000 (0:00:00.870) 0:00:05.431 *******
2026-03-18 02:28:28.052715 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa)
2026-03-18 02:28:28.052721 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa)
2026-03-18 02:28:28.052728 | orchestrator |
2026-03-18 02:28:28.052734 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:28:28.052741 | orchestrator | Wednesday 18 March 2026 02:28:25 +0000 (0:00:01.044) 0:00:06.475 *******
2026-03-18 02:28:28.052747 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-18 02:28:28.052754 | orchestrator |
2026-03-18 02:28:28.052760 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:28:28.052767 | orchestrator | Wednesday 18 March 2026 02:28:25 +0000 (0:00:00.401) 0:00:06.877 *******
2026-03-18 02:28:28.052773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-18 02:28:28.052780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-18 02:28:28.052786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-18 02:28:28.052793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-18 02:28:28.052799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-18 02:28:28.052805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-18 02:28:28.052812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-18 02:28:28.052819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml
for testbed-node-3 => (item=loop7) 2026-03-18 02:28:28.052825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-18 02:28:28.052832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-18 02:28:28.052838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-18 02:28:28.052844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-18 02:28:28.052850 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-18 02:28:28.052857 | orchestrator | 2026-03-18 02:28:28.052863 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:28.052870 | orchestrator | Wednesday 18 March 2026 02:28:26 +0000 (0:00:00.444) 0:00:07.322 ******* 2026-03-18 02:28:28.052876 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:28.052883 | orchestrator | 2026-03-18 02:28:28.052890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:28.052896 | orchestrator | Wednesday 18 March 2026 02:28:26 +0000 (0:00:00.239) 0:00:07.562 ******* 2026-03-18 02:28:28.052903 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:28.052911 | orchestrator | 2026-03-18 02:28:28.052917 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:28.052924 | orchestrator | Wednesday 18 March 2026 02:28:26 +0000 (0:00:00.234) 0:00:07.797 ******* 2026-03-18 02:28:28.052931 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:28.052937 | orchestrator | 2026-03-18 02:28:28.052945 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:28.052951 | orchestrator | Wednesday 18 March 2026 02:28:27 
+0000 (0:00:00.243) 0:00:08.040 ******* 2026-03-18 02:28:28.052957 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:28.052963 | orchestrator | 2026-03-18 02:28:28.052970 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:28.052981 | orchestrator | Wednesday 18 March 2026 02:28:27 +0000 (0:00:00.228) 0:00:08.269 ******* 2026-03-18 02:28:28.052987 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:28.052993 | orchestrator | 2026-03-18 02:28:28.052999 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:28.053005 | orchestrator | Wednesday 18 March 2026 02:28:27 +0000 (0:00:00.223) 0:00:08.493 ******* 2026-03-18 02:28:28.053011 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:28.053017 | orchestrator | 2026-03-18 02:28:28.053023 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:28.053029 | orchestrator | Wednesday 18 March 2026 02:28:27 +0000 (0:00:00.233) 0:00:08.726 ******* 2026-03-18 02:28:28.053035 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:28.053041 | orchestrator | 2026-03-18 02:28:28.053052 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:36.602252 | orchestrator | Wednesday 18 March 2026 02:28:28 +0000 (0:00:00.216) 0:00:08.942 ******* 2026-03-18 02:28:36.602358 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.602374 | orchestrator | 2026-03-18 02:28:36.602388 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:36.602401 | orchestrator | Wednesday 18 March 2026 02:28:28 +0000 (0:00:00.227) 0:00:09.171 ******* 2026-03-18 02:28:36.602413 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-18 02:28:36.602425 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-18 
02:28:36.602438 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-18 02:28:36.602449 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-18 02:28:36.602545 | orchestrator | 2026-03-18 02:28:36.602574 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:36.602587 | orchestrator | Wednesday 18 March 2026 02:28:29 +0000 (0:00:01.207) 0:00:10.378 ******* 2026-03-18 02:28:36.602599 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.602611 | orchestrator | 2026-03-18 02:28:36.602623 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:36.602635 | orchestrator | Wednesday 18 March 2026 02:28:29 +0000 (0:00:00.219) 0:00:10.597 ******* 2026-03-18 02:28:36.602647 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.602658 | orchestrator | 2026-03-18 02:28:36.602669 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:36.602681 | orchestrator | Wednesday 18 March 2026 02:28:29 +0000 (0:00:00.234) 0:00:10.832 ******* 2026-03-18 02:28:36.602693 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.602704 | orchestrator | 2026-03-18 02:28:36.602716 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:36.602727 | orchestrator | Wednesday 18 March 2026 02:28:30 +0000 (0:00:00.250) 0:00:11.082 ******* 2026-03-18 02:28:36.602739 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.602750 | orchestrator | 2026-03-18 02:28:36.602761 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-18 02:28:36.602772 | orchestrator | Wednesday 18 March 2026 02:28:30 +0000 (0:00:00.217) 0:00:11.299 ******* 2026-03-18 02:28:36.602784 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-03-18 02:28:36.602795 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-18 02:28:36.602807 | orchestrator | 2026-03-18 02:28:36.602829 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-18 02:28:36.602841 | orchestrator | Wednesday 18 March 2026 02:28:30 +0000 (0:00:00.193) 0:00:11.493 ******* 2026-03-18 02:28:36.602853 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.602864 | orchestrator | 2026-03-18 02:28:36.602876 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-18 02:28:36.602888 | orchestrator | Wednesday 18 March 2026 02:28:30 +0000 (0:00:00.140) 0:00:11.633 ******* 2026-03-18 02:28:36.602900 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.602934 | orchestrator | 2026-03-18 02:28:36.602947 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-18 02:28:36.602958 | orchestrator | Wednesday 18 March 2026 02:28:30 +0000 (0:00:00.178) 0:00:11.812 ******* 2026-03-18 02:28:36.602970 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.602982 | orchestrator | 2026-03-18 02:28:36.602993 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-18 02:28:36.603005 | orchestrator | Wednesday 18 March 2026 02:28:31 +0000 (0:00:00.141) 0:00:11.953 ******* 2026-03-18 02:28:36.603016 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:28:36.603028 | orchestrator | 2026-03-18 02:28:36.603040 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-18 02:28:36.603051 | orchestrator | Wednesday 18 March 2026 02:28:31 +0000 (0:00:00.149) 0:00:12.103 ******* 2026-03-18 02:28:36.603063 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dcb28020-3d32-5af4-a4b7-0acc667eefcb'}}) 2026-03-18 02:28:36.603077 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9a3797da-ebdd-566a-aa35-3713ec7e039a'}}) 2026-03-18 02:28:36.603088 | orchestrator | 2026-03-18 02:28:36.603099 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-18 02:28:36.603111 | orchestrator | Wednesday 18 March 2026 02:28:31 +0000 (0:00:00.177) 0:00:12.280 ******* 2026-03-18 02:28:36.603123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dcb28020-3d32-5af4-a4b7-0acc667eefcb'}})  2026-03-18 02:28:36.603137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9a3797da-ebdd-566a-aa35-3713ec7e039a'}})  2026-03-18 02:28:36.603148 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.603160 | orchestrator | 2026-03-18 02:28:36.603171 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-18 02:28:36.603183 | orchestrator | Wednesday 18 March 2026 02:28:31 +0000 (0:00:00.424) 0:00:12.705 ******* 2026-03-18 02:28:36.603193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dcb28020-3d32-5af4-a4b7-0acc667eefcb'}})  2026-03-18 02:28:36.603205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9a3797da-ebdd-566a-aa35-3713ec7e039a'}})  2026-03-18 02:28:36.603216 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.603227 | orchestrator | 2026-03-18 02:28:36.603239 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-18 02:28:36.603250 | orchestrator | Wednesday 18 March 2026 02:28:32 +0000 (0:00:00.243) 0:00:12.948 ******* 2026-03-18 02:28:36.603261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dcb28020-3d32-5af4-a4b7-0acc667eefcb'}})  2026-03-18 02:28:36.603292 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9a3797da-ebdd-566a-aa35-3713ec7e039a'}})  2026-03-18 02:28:36.603304 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.603316 | orchestrator | 2026-03-18 02:28:36.603327 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-18 02:28:36.603340 | orchestrator | Wednesday 18 March 2026 02:28:32 +0000 (0:00:00.171) 0:00:13.120 ******* 2026-03-18 02:28:36.603351 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:28:36.603362 | orchestrator | 2026-03-18 02:28:36.603384 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-18 02:28:36.603395 | orchestrator | Wednesday 18 March 2026 02:28:32 +0000 (0:00:00.157) 0:00:13.278 ******* 2026-03-18 02:28:36.603406 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:28:36.603418 | orchestrator | 2026-03-18 02:28:36.603436 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-18 02:28:36.603447 | orchestrator | Wednesday 18 March 2026 02:28:32 +0000 (0:00:00.155) 0:00:13.433 ******* 2026-03-18 02:28:36.603481 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.603501 | orchestrator | 2026-03-18 02:28:36.603513 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-18 02:28:36.603533 | orchestrator | Wednesday 18 March 2026 02:28:32 +0000 (0:00:00.130) 0:00:13.564 ******* 2026-03-18 02:28:36.603545 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.603557 | orchestrator | 2026-03-18 02:28:36.603568 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-18 02:28:36.603579 | orchestrator | Wednesday 18 March 2026 02:28:32 +0000 (0:00:00.148) 0:00:13.712 ******* 2026-03-18 02:28:36.603590 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.603601 | orchestrator | 2026-03-18 
02:28:36.603612 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-18 02:28:36.603623 | orchestrator | Wednesday 18 March 2026 02:28:32 +0000 (0:00:00.146) 0:00:13.858 ******* 2026-03-18 02:28:36.603635 | orchestrator | ok: [testbed-node-3] => { 2026-03-18 02:28:36.603647 | orchestrator |  "ceph_osd_devices": { 2026-03-18 02:28:36.603658 | orchestrator |  "sdb": { 2026-03-18 02:28:36.603669 | orchestrator |  "osd_lvm_uuid": "dcb28020-3d32-5af4-a4b7-0acc667eefcb" 2026-03-18 02:28:36.603680 | orchestrator |  }, 2026-03-18 02:28:36.603691 | orchestrator |  "sdc": { 2026-03-18 02:28:36.603702 | orchestrator |  "osd_lvm_uuid": "9a3797da-ebdd-566a-aa35-3713ec7e039a" 2026-03-18 02:28:36.603714 | orchestrator |  } 2026-03-18 02:28:36.603726 | orchestrator |  } 2026-03-18 02:28:36.603738 | orchestrator | } 2026-03-18 02:28:36.603749 | orchestrator | 2026-03-18 02:28:36.603760 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-18 02:28:36.603772 | orchestrator | Wednesday 18 March 2026 02:28:33 +0000 (0:00:00.169) 0:00:14.028 ******* 2026-03-18 02:28:36.603783 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.603794 | orchestrator | 2026-03-18 02:28:36.603804 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-18 02:28:36.603815 | orchestrator | Wednesday 18 March 2026 02:28:33 +0000 (0:00:00.156) 0:00:14.185 ******* 2026-03-18 02:28:36.603826 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.603837 | orchestrator | 2026-03-18 02:28:36.603847 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-18 02:28:36.603857 | orchestrator | Wednesday 18 March 2026 02:28:33 +0000 (0:00:00.145) 0:00:14.330 ******* 2026-03-18 02:28:36.603867 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:28:36.603877 | orchestrator | 2026-03-18 
02:28:36.603888 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-18 02:28:36.603899 | orchestrator | Wednesday 18 March 2026 02:28:33 +0000 (0:00:00.149) 0:00:14.480 ******* 2026-03-18 02:28:36.603910 | orchestrator | changed: [testbed-node-3] => { 2026-03-18 02:28:36.603920 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-18 02:28:36.603930 | orchestrator |  "ceph_osd_devices": { 2026-03-18 02:28:36.603941 | orchestrator |  "sdb": { 2026-03-18 02:28:36.603951 | orchestrator |  "osd_lvm_uuid": "dcb28020-3d32-5af4-a4b7-0acc667eefcb" 2026-03-18 02:28:36.603961 | orchestrator |  }, 2026-03-18 02:28:36.603971 | orchestrator |  "sdc": { 2026-03-18 02:28:36.603982 | orchestrator |  "osd_lvm_uuid": "9a3797da-ebdd-566a-aa35-3713ec7e039a" 2026-03-18 02:28:36.603993 | orchestrator |  } 2026-03-18 02:28:36.604003 | orchestrator |  }, 2026-03-18 02:28:36.604014 | orchestrator |  "lvm_volumes": [ 2026-03-18 02:28:36.604024 | orchestrator |  { 2026-03-18 02:28:36.604035 | orchestrator |  "data": "osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb", 2026-03-18 02:28:36.604045 | orchestrator |  "data_vg": "ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb" 2026-03-18 02:28:36.604055 | orchestrator |  }, 2026-03-18 02:28:36.604066 | orchestrator |  { 2026-03-18 02:28:36.604077 | orchestrator |  "data": "osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a", 2026-03-18 02:28:36.604088 | orchestrator |  "data_vg": "ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a" 2026-03-18 02:28:36.604105 | orchestrator |  } 2026-03-18 02:28:36.604116 | orchestrator |  ] 2026-03-18 02:28:36.604126 | orchestrator |  } 2026-03-18 02:28:36.604136 | orchestrator | } 2026-03-18 02:28:36.604147 | orchestrator | 2026-03-18 02:28:36.604157 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-18 02:28:36.604168 | orchestrator | Wednesday 18 March 2026 02:28:34 +0000 (0:00:00.461) 0:00:14.942 ******* 2026-03-18 
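The configuration dump above shows the naming scheme the playbook applies: each OSD's `osd_lvm_uuid` becomes an LV named `osd-block-<uuid>` inside a VG named `ceph-<uuid>`. A minimal sketch of that mapping (the helper name is hypothetical, not part of the playbook):

```python
def lvm_volumes_from_osd_devices(ceph_osd_devices):
    """Derive ceph-ansible style lvm_volumes entries from per-OSD UUIDs.

    Mirrors the structure printed by the 'Print configuration data' task:
    LV "osd-block-<uuid>" in VG "ceph-<uuid>" for each configured device.
    """
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in ceph_osd_devices.values()
    ]


# Values taken from the testbed-node-3 output above.
devices = {
    "sdb": {"osd_lvm_uuid": "dcb28020-3d32-5af4-a4b7-0acc667eefcb"},
    "sdc": {"osd_lvm_uuid": "9a3797da-ebdd-566a-aa35-3713ec7e039a"},
}
print(lvm_volumes_from_osd_devices(devices))
```

This reproduces the `lvm_volumes` list written to the configuration file by the handler below; the WAL/DB variants seen skipping in the log would add `wal_vg`/`db_vg` keys in the same fashion.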
02:28:36.604178 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-18 02:28:36.604189 | orchestrator | 2026-03-18 02:28:36.604199 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-18 02:28:36.604209 | orchestrator | 2026-03-18 02:28:36.604219 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-18 02:28:36.604230 | orchestrator | Wednesday 18 March 2026 02:28:36 +0000 (0:00:01.999) 0:00:16.942 ******* 2026-03-18 02:28:36.604240 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-18 02:28:36.604252 | orchestrator | 2026-03-18 02:28:36.604262 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-18 02:28:36.604272 | orchestrator | Wednesday 18 March 2026 02:28:36 +0000 (0:00:00.292) 0:00:17.235 ******* 2026-03-18 02:28:36.604282 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:28:36.604292 | orchestrator | 2026-03-18 02:28:36.604309 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:28:46.855962 | orchestrator | Wednesday 18 March 2026 02:28:36 +0000 (0:00:00.265) 0:00:17.501 ******* 2026-03-18 02:28:46.856070 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-18 02:28:46.856081 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-18 02:28:46.856088 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-18 02:28:46.856096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-18 02:28:46.856116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-18 02:28:46.856131 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-18 02:28:46.856139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-18 02:28:46.856146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-18 02:28:46.856153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-18 02:28:46.856160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-18 02:28:46.856167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-18 02:28:46.856174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-18 02:28:46.856181 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-18 02:28:46.856188 | orchestrator | 2026-03-18 02:28:46.856195 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:28:46.856202 | orchestrator | Wednesday 18 March 2026 02:28:37 +0000 (0:00:00.432) 0:00:17.933 ******* 2026-03-18 02:28:46.856209 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:46.856217 | orchestrator | 2026-03-18 02:28:46.856224 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:28:46.856231 | orchestrator | Wednesday 18 March 2026 02:28:37 +0000 (0:00:00.221) 0:00:18.154 ******* 2026-03-18 02:28:46.856237 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:46.856244 | orchestrator | 2026-03-18 02:28:46.856251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:28:46.856257 | orchestrator | Wednesday 18 March 2026 02:28:37 +0000 (0:00:00.236) 0:00:18.390 ******* 2026-03-18 02:28:46.856264 | orchestrator | skipping: 
[testbed-node-4] 2026-03-18 02:28:46.856288 | orchestrator | 2026-03-18 02:28:46.856295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:28:46.856302 | orchestrator | Wednesday 18 March 2026 02:28:37 +0000 (0:00:00.211) 0:00:18.602 ******* 2026-03-18 02:28:46.856308 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:46.856315 | orchestrator | 2026-03-18 02:28:46.856321 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:28:46.856328 | orchestrator | Wednesday 18 March 2026 02:28:38 +0000 (0:00:00.740) 0:00:19.343 ******* 2026-03-18 02:28:46.856334 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:46.856341 | orchestrator | 2026-03-18 02:28:46.856348 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:28:46.856354 | orchestrator | Wednesday 18 March 2026 02:28:38 +0000 (0:00:00.222) 0:00:19.565 ******* 2026-03-18 02:28:46.856361 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:46.856367 | orchestrator | 2026-03-18 02:28:46.856374 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:28:46.856381 | orchestrator | Wednesday 18 March 2026 02:28:38 +0000 (0:00:00.223) 0:00:19.789 ******* 2026-03-18 02:28:46.856387 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:46.856394 | orchestrator | 2026-03-18 02:28:46.856400 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:28:46.856407 | orchestrator | Wednesday 18 March 2026 02:28:39 +0000 (0:00:00.221) 0:00:20.011 ******* 2026-03-18 02:28:46.856413 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:46.856420 | orchestrator | 2026-03-18 02:28:46.856426 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:28:46.856433 | 
orchestrator | Wednesday 18 March 2026 02:28:39 +0000 (0:00:00.222) 0:00:20.233 ******* 2026-03-18 02:28:46.856440 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d) 2026-03-18 02:28:46.856502 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d) 2026-03-18 02:28:46.856509 | orchestrator | 2026-03-18 02:28:46.856517 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:28:46.856525 | orchestrator | Wednesday 18 March 2026 02:28:39 +0000 (0:00:00.487) 0:00:20.721 ******* 2026-03-18 02:28:46.856533 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc) 2026-03-18 02:28:46.856540 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc) 2026-03-18 02:28:46.856548 | orchestrator | 2026-03-18 02:28:46.856556 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:28:46.856564 | orchestrator | Wednesday 18 March 2026 02:28:40 +0000 (0:00:00.470) 0:00:21.192 ******* 2026-03-18 02:28:46.856572 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a) 2026-03-18 02:28:46.856580 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a) 2026-03-18 02:28:46.856588 | orchestrator | 2026-03-18 02:28:46.856595 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:28:46.856616 | orchestrator | Wednesday 18 March 2026 02:28:40 +0000 (0:00:00.477) 0:00:21.669 ******* 2026-03-18 02:28:46.856625 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a) 2026-03-18 02:28:46.856633 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a) 2026-03-18 02:28:46.856641 | orchestrator | 2026-03-18 02:28:46.856648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:28:46.856655 | orchestrator | Wednesday 18 March 2026 02:28:41 +0000 (0:00:00.745) 0:00:22.414 ******* 2026-03-18 02:28:46.856666 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-18 02:28:46.856673 | orchestrator | 2026-03-18 02:28:46.856680 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:46.856693 | orchestrator | Wednesday 18 March 2026 02:28:42 +0000 (0:00:00.677) 0:00:23.092 ******* 2026-03-18 02:28:46.856700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-18 02:28:46.856706 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-18 02:28:46.856713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-18 02:28:46.856719 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-18 02:28:46.856726 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-18 02:28:46.856732 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-18 02:28:46.856752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-18 02:28:46.856767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-18 02:28:46.856774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-18 02:28:46.856780 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-18 02:28:46.856787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-18 02:28:46.856794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-18 02:28:46.856800 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-18 02:28:46.856807 | orchestrator | 2026-03-18 02:28:46.856814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:46.856820 | orchestrator | Wednesday 18 March 2026 02:28:43 +0000 (0:00:00.985) 0:00:24.078 ******* 2026-03-18 02:28:46.856827 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:46.856833 | orchestrator | 2026-03-18 02:28:46.856840 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:46.856847 | orchestrator | Wednesday 18 March 2026 02:28:43 +0000 (0:00:00.236) 0:00:24.314 ******* 2026-03-18 02:28:46.856853 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:46.856860 | orchestrator | 2026-03-18 02:28:46.856866 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:46.856873 | orchestrator | Wednesday 18 March 2026 02:28:43 +0000 (0:00:00.240) 0:00:24.555 ******* 2026-03-18 02:28:46.856880 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:46.856886 | orchestrator | 2026-03-18 02:28:46.856893 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:46.856900 | orchestrator | Wednesday 18 March 2026 02:28:43 +0000 (0:00:00.234) 0:00:24.789 ******* 2026-03-18 02:28:46.856907 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:46.856913 | orchestrator | 2026-03-18 02:28:46.856919 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-18 02:28:46.856926 | orchestrator | Wednesday 18 March 2026 02:28:44 +0000 (0:00:00.238) 0:00:25.028 ******* 2026-03-18 02:28:46.856933 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:46.856939 | orchestrator | 2026-03-18 02:28:46.856946 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:46.856953 | orchestrator | Wednesday 18 March 2026 02:28:44 +0000 (0:00:00.247) 0:00:25.275 ******* 2026-03-18 02:28:46.856959 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:46.856966 | orchestrator | 2026-03-18 02:28:46.856972 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:46.856979 | orchestrator | Wednesday 18 March 2026 02:28:44 +0000 (0:00:00.252) 0:00:25.528 ******* 2026-03-18 02:28:46.856985 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:46.856992 | orchestrator | 2026-03-18 02:28:46.856998 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:46.857010 | orchestrator | Wednesday 18 March 2026 02:28:44 +0000 (0:00:00.229) 0:00:25.758 ******* 2026-03-18 02:28:46.857017 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:46.857023 | orchestrator | 2026-03-18 02:28:46.857030 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:46.857036 | orchestrator | Wednesday 18 March 2026 02:28:45 +0000 (0:00:00.236) 0:00:25.994 ******* 2026-03-18 02:28:46.857043 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-18 02:28:46.857051 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-18 02:28:46.857058 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-18 02:28:46.857064 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-18 02:28:46.857071 | orchestrator | 2026-03-18 
02:28:46.857077 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:46.857084 | orchestrator | Wednesday 18 March 2026 02:28:46 +0000 (0:00:00.982) 0:00:26.977 ******* 2026-03-18 02:28:46.857091 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:53.613727 | orchestrator | 2026-03-18 02:28:53.613812 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:53.613823 | orchestrator | Wednesday 18 March 2026 02:28:46 +0000 (0:00:00.778) 0:00:27.755 ******* 2026-03-18 02:28:53.613830 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:53.613838 | orchestrator | 2026-03-18 02:28:53.613844 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:53.613850 | orchestrator | Wednesday 18 March 2026 02:28:47 +0000 (0:00:00.243) 0:00:27.999 ******* 2026-03-18 02:28:53.613857 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:53.613862 | orchestrator | 2026-03-18 02:28:53.613868 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:28:53.613887 | orchestrator | Wednesday 18 March 2026 02:28:47 +0000 (0:00:00.231) 0:00:28.231 ******* 2026-03-18 02:28:53.613893 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:53.613899 | orchestrator | 2026-03-18 02:28:53.613905 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-18 02:28:53.613911 | orchestrator | Wednesday 18 March 2026 02:28:47 +0000 (0:00:00.225) 0:00:28.456 ******* 2026-03-18 02:28:53.613917 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-18 02:28:53.613922 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-18 02:28:53.613928 | orchestrator | 2026-03-18 02:28:53.613934 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-03-18 02:28:53.613940 | orchestrator | Wednesday 18 March 2026 02:28:47 +0000 (0:00:00.197) 0:00:28.654 ******* 2026-03-18 02:28:53.613946 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:53.613952 | orchestrator | 2026-03-18 02:28:53.613957 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-18 02:28:53.613963 | orchestrator | Wednesday 18 March 2026 02:28:47 +0000 (0:00:00.141) 0:00:28.796 ******* 2026-03-18 02:28:53.613969 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:53.613975 | orchestrator | 2026-03-18 02:28:53.613981 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-18 02:28:53.613986 | orchestrator | Wednesday 18 March 2026 02:28:48 +0000 (0:00:00.146) 0:00:28.943 ******* 2026-03-18 02:28:53.613992 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:53.613998 | orchestrator | 2026-03-18 02:28:53.614004 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-18 02:28:53.614010 | orchestrator | Wednesday 18 March 2026 02:28:48 +0000 (0:00:00.150) 0:00:29.093 ******* 2026-03-18 02:28:53.614067 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:28:53.614080 | orchestrator | 2026-03-18 02:28:53.614090 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-18 02:28:53.614100 | orchestrator | Wednesday 18 March 2026 02:28:48 +0000 (0:00:00.176) 0:00:29.269 ******* 2026-03-18 02:28:53.614111 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd0e002fd-9a73-564c-a03c-ee3a79d477af'}}) 2026-03-18 02:28:53.614143 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ab16e1e8-130f-595d-96ba-aeefaeb1133d'}}) 2026-03-18 02:28:53.614153 | orchestrator | 2026-03-18 02:28:53.614181 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-18 02:28:53.614193 | orchestrator | Wednesday 18 March 2026 02:28:48 +0000 (0:00:00.180) 0:00:29.450 ******* 2026-03-18 02:28:53.614203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd0e002fd-9a73-564c-a03c-ee3a79d477af'}})  2026-03-18 02:28:53.614215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ab16e1e8-130f-595d-96ba-aeefaeb1133d'}})  2026-03-18 02:28:53.614225 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:53.614234 | orchestrator | 2026-03-18 02:28:53.614244 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-18 02:28:53.614254 | orchestrator | Wednesday 18 March 2026 02:28:48 +0000 (0:00:00.186) 0:00:29.637 ******* 2026-03-18 02:28:53.614264 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd0e002fd-9a73-564c-a03c-ee3a79d477af'}})  2026-03-18 02:28:53.614275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ab16e1e8-130f-595d-96ba-aeefaeb1133d'}})  2026-03-18 02:28:53.614285 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:53.614296 | orchestrator | 2026-03-18 02:28:53.614306 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-18 02:28:53.614317 | orchestrator | Wednesday 18 March 2026 02:28:49 +0000 (0:00:00.434) 0:00:30.072 ******* 2026-03-18 02:28:53.614328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd0e002fd-9a73-564c-a03c-ee3a79d477af'}})  2026-03-18 02:28:53.614340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ab16e1e8-130f-595d-96ba-aeefaeb1133d'}})  2026-03-18 02:28:53.614350 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:53.614360 | 
orchestrator | 2026-03-18 02:28:53.614371 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-18 02:28:53.614382 | orchestrator | Wednesday 18 March 2026 02:28:49 +0000 (0:00:00.191) 0:00:30.263 ******* 2026-03-18 02:28:53.614391 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:28:53.614398 | orchestrator | 2026-03-18 02:28:53.614405 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-18 02:28:53.614412 | orchestrator | Wednesday 18 March 2026 02:28:49 +0000 (0:00:00.158) 0:00:30.422 ******* 2026-03-18 02:28:53.614419 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:28:53.614425 | orchestrator | 2026-03-18 02:28:53.614432 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-18 02:28:53.614487 | orchestrator | Wednesday 18 March 2026 02:28:49 +0000 (0:00:00.183) 0:00:30.606 ******* 2026-03-18 02:28:53.614516 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:53.614527 | orchestrator | 2026-03-18 02:28:53.614537 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-18 02:28:53.614547 | orchestrator | Wednesday 18 March 2026 02:28:49 +0000 (0:00:00.183) 0:00:30.790 ******* 2026-03-18 02:28:53.614557 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:53.614567 | orchestrator | 2026-03-18 02:28:53.614577 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-18 02:28:53.614588 | orchestrator | Wednesday 18 March 2026 02:28:50 +0000 (0:00:00.165) 0:00:30.956 ******* 2026-03-18 02:28:53.614598 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:53.614609 | orchestrator | 2026-03-18 02:28:53.614626 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-18 02:28:53.614636 | orchestrator | Wednesday 18 March 2026 02:28:50 +0000 
(0:00:00.175) 0:00:31.132 ******* 2026-03-18 02:28:53.614645 | orchestrator | ok: [testbed-node-4] => { 2026-03-18 02:28:53.614655 | orchestrator |  "ceph_osd_devices": { 2026-03-18 02:28:53.614678 | orchestrator |  "sdb": { 2026-03-18 02:28:53.614687 | orchestrator |  "osd_lvm_uuid": "d0e002fd-9a73-564c-a03c-ee3a79d477af" 2026-03-18 02:28:53.614697 | orchestrator |  }, 2026-03-18 02:28:53.614707 | orchestrator |  "sdc": { 2026-03-18 02:28:53.614716 | orchestrator |  "osd_lvm_uuid": "ab16e1e8-130f-595d-96ba-aeefaeb1133d" 2026-03-18 02:28:53.614726 | orchestrator |  } 2026-03-18 02:28:53.614735 | orchestrator |  } 2026-03-18 02:28:53.614745 | orchestrator | } 2026-03-18 02:28:53.614756 | orchestrator | 2026-03-18 02:28:53.614767 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-18 02:28:53.614778 | orchestrator | Wednesday 18 March 2026 02:28:50 +0000 (0:00:00.163) 0:00:31.295 ******* 2026-03-18 02:28:53.614787 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:53.614796 | orchestrator | 2026-03-18 02:28:53.614807 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-18 02:28:53.614817 | orchestrator | Wednesday 18 March 2026 02:28:50 +0000 (0:00:00.188) 0:00:31.484 ******* 2026-03-18 02:28:53.614829 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:53.614839 | orchestrator | 2026-03-18 02:28:53.614849 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-18 02:28:53.614859 | orchestrator | Wednesday 18 March 2026 02:28:50 +0000 (0:00:00.164) 0:00:31.649 ******* 2026-03-18 02:28:53.614869 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:28:53.614879 | orchestrator | 2026-03-18 02:28:53.614889 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-18 02:28:53.614899 | orchestrator | Wednesday 18 March 2026 02:28:50 +0000 
(0:00:00.140) 0:00:31.789 ******* 2026-03-18 02:28:53.614911 | orchestrator | changed: [testbed-node-4] => { 2026-03-18 02:28:53.614922 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-18 02:28:53.614932 | orchestrator |  "ceph_osd_devices": { 2026-03-18 02:28:53.614942 | orchestrator |  "sdb": { 2026-03-18 02:28:53.614953 | orchestrator |  "osd_lvm_uuid": "d0e002fd-9a73-564c-a03c-ee3a79d477af" 2026-03-18 02:28:53.614964 | orchestrator |  }, 2026-03-18 02:28:53.614975 | orchestrator |  "sdc": { 2026-03-18 02:28:53.614984 | orchestrator |  "osd_lvm_uuid": "ab16e1e8-130f-595d-96ba-aeefaeb1133d" 2026-03-18 02:28:53.614995 | orchestrator |  } 2026-03-18 02:28:53.615005 | orchestrator |  }, 2026-03-18 02:28:53.615014 | orchestrator |  "lvm_volumes": [ 2026-03-18 02:28:53.615025 | orchestrator |  { 2026-03-18 02:28:53.615036 | orchestrator |  "data": "osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af", 2026-03-18 02:28:53.615047 | orchestrator |  "data_vg": "ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af" 2026-03-18 02:28:53.615058 | orchestrator |  }, 2026-03-18 02:28:53.615068 | orchestrator |  { 2026-03-18 02:28:53.615080 | orchestrator |  "data": "osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d", 2026-03-18 02:28:53.615091 | orchestrator |  "data_vg": "ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d" 2026-03-18 02:28:53.615102 | orchestrator |  } 2026-03-18 02:28:53.615113 | orchestrator |  ] 2026-03-18 02:28:53.615125 | orchestrator |  } 2026-03-18 02:28:53.615137 | orchestrator | } 2026-03-18 02:28:53.615148 | orchestrator | 2026-03-18 02:28:53.615159 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-18 02:28:53.615170 | orchestrator | Wednesday 18 March 2026 02:28:51 +0000 (0:00:00.489) 0:00:32.279 ******* 2026-03-18 02:28:53.615180 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-18 02:28:53.615191 | orchestrator | 2026-03-18 02:28:53.615202 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-18 02:28:53.615213 | orchestrator | 2026-03-18 02:28:53.615224 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-18 02:28:53.615236 | orchestrator | Wednesday 18 March 2026 02:28:52 +0000 (0:00:01.244) 0:00:33.523 ******* 2026-03-18 02:28:53.615246 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-18 02:28:53.615265 | orchestrator | 2026-03-18 02:28:53.615276 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-18 02:28:53.615287 | orchestrator | Wednesday 18 March 2026 02:28:52 +0000 (0:00:00.284) 0:00:33.807 ******* 2026-03-18 02:28:53.615298 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:28:53.615309 | orchestrator | 2026-03-18 02:28:53.615320 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:28:53.615331 | orchestrator | Wednesday 18 March 2026 02:28:53 +0000 (0:00:00.282) 0:00:34.089 ******* 2026-03-18 02:28:53.615342 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-18 02:28:53.615352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-18 02:28:53.615363 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-18 02:28:53.615374 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-18 02:28:53.615385 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-18 02:28:53.615407 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-18 02:29:03.096809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-18 02:29:03.096937 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-18 02:29:03.096962 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-18 02:29:03.096980 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-18 02:29:03.096998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-18 02:29:03.097037 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-18 02:29:03.097057 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-18 02:29:03.097076 | orchestrator | 2026-03-18 02:29:03.097095 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:29:03.097115 | orchestrator | Wednesday 18 March 2026 02:28:53 +0000 (0:00:00.420) 0:00:34.509 ******* 2026-03-18 02:29:03.097132 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.097151 | orchestrator | 2026-03-18 02:29:03.097162 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:29:03.097172 | orchestrator | Wednesday 18 March 2026 02:28:53 +0000 (0:00:00.226) 0:00:34.736 ******* 2026-03-18 02:29:03.097181 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.097191 | orchestrator | 2026-03-18 02:29:03.097201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:29:03.097210 | orchestrator | Wednesday 18 March 2026 02:28:54 +0000 (0:00:00.252) 0:00:34.988 ******* 2026-03-18 02:29:03.097220 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.097230 | orchestrator | 2026-03-18 02:29:03.097239 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:29:03.097249 | 
orchestrator | Wednesday 18 March 2026 02:28:54 +0000 (0:00:00.234) 0:00:35.223 ******* 2026-03-18 02:29:03.097259 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.097268 | orchestrator | 2026-03-18 02:29:03.097278 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:29:03.097287 | orchestrator | Wednesday 18 March 2026 02:28:55 +0000 (0:00:00.692) 0:00:35.916 ******* 2026-03-18 02:29:03.097297 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.097307 | orchestrator | 2026-03-18 02:29:03.097317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:29:03.097326 | orchestrator | Wednesday 18 March 2026 02:28:55 +0000 (0:00:00.236) 0:00:36.153 ******* 2026-03-18 02:29:03.097336 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.097368 | orchestrator | 2026-03-18 02:29:03.097380 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:29:03.097392 | orchestrator | Wednesday 18 March 2026 02:28:55 +0000 (0:00:00.230) 0:00:36.383 ******* 2026-03-18 02:29:03.097403 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.097415 | orchestrator | 2026-03-18 02:29:03.097426 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:29:03.097529 | orchestrator | Wednesday 18 March 2026 02:28:55 +0000 (0:00:00.227) 0:00:36.611 ******* 2026-03-18 02:29:03.097541 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.097553 | orchestrator | 2026-03-18 02:29:03.097564 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:29:03.097576 | orchestrator | Wednesday 18 March 2026 02:28:55 +0000 (0:00:00.249) 0:00:36.860 ******* 2026-03-18 02:29:03.097586 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403) 2026-03-18 02:29:03.097597 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403) 2026-03-18 02:29:03.097607 | orchestrator | 2026-03-18 02:29:03.097617 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:29:03.097626 | orchestrator | Wednesday 18 March 2026 02:28:56 +0000 (0:00:00.458) 0:00:37.319 ******* 2026-03-18 02:29:03.097636 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00) 2026-03-18 02:29:03.097646 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00) 2026-03-18 02:29:03.097655 | orchestrator | 2026-03-18 02:29:03.097665 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:29:03.097675 | orchestrator | Wednesday 18 March 2026 02:28:56 +0000 (0:00:00.455) 0:00:37.774 ******* 2026-03-18 02:29:03.097688 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568) 2026-03-18 02:29:03.097705 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568) 2026-03-18 02:29:03.097721 | orchestrator | 2026-03-18 02:29:03.097736 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:29:03.097752 | orchestrator | Wednesday 18 March 2026 02:28:57 +0000 (0:00:00.482) 0:00:38.257 ******* 2026-03-18 02:29:03.097766 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216) 2026-03-18 02:29:03.097784 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216) 2026-03-18 02:29:03.097802 | orchestrator | 2026-03-18 02:29:03.097818 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-18 02:29:03.097835 | orchestrator | Wednesday 18 March 2026 02:28:57 +0000 (0:00:00.446) 0:00:38.703 ******* 2026-03-18 02:29:03.097848 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-18 02:29:03.097858 | orchestrator | 2026-03-18 02:29:03.097867 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:29:03.097897 | orchestrator | Wednesday 18 March 2026 02:28:58 +0000 (0:00:00.386) 0:00:39.089 ******* 2026-03-18 02:29:03.097907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-18 02:29:03.097917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-18 02:29:03.097926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-18 02:29:03.097936 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-18 02:29:03.097954 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-18 02:29:03.097964 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-18 02:29:03.097973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-18 02:29:03.097997 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-18 02:29:03.098006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-18 02:29:03.098078 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-18 02:29:03.098093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-03-18 02:29:03.098109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-18 02:29:03.098126 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-18 02:29:03.098141 | orchestrator | 2026-03-18 02:29:03.098156 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:29:03.098172 | orchestrator | Wednesday 18 March 2026 02:28:58 +0000 (0:00:00.686) 0:00:39.776 ******* 2026-03-18 02:29:03.098190 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.098205 | orchestrator | 2026-03-18 02:29:03.098223 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:29:03.098240 | orchestrator | Wednesday 18 March 2026 02:28:59 +0000 (0:00:00.243) 0:00:40.019 ******* 2026-03-18 02:29:03.098256 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.098272 | orchestrator | 2026-03-18 02:29:03.098282 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:29:03.098291 | orchestrator | Wednesday 18 March 2026 02:28:59 +0000 (0:00:00.216) 0:00:40.235 ******* 2026-03-18 02:29:03.098301 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.098311 | orchestrator | 2026-03-18 02:29:03.098320 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:29:03.098330 | orchestrator | Wednesday 18 March 2026 02:28:59 +0000 (0:00:00.223) 0:00:40.459 ******* 2026-03-18 02:29:03.098339 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.098349 | orchestrator | 2026-03-18 02:29:03.098358 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:29:03.098368 | orchestrator | Wednesday 18 March 2026 02:28:59 +0000 (0:00:00.222) 0:00:40.681 ******* 2026-03-18 02:29:03.098377 
| orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.098386 | orchestrator | 2026-03-18 02:29:03.098396 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:29:03.098405 | orchestrator | Wednesday 18 March 2026 02:28:59 +0000 (0:00:00.216) 0:00:40.897 ******* 2026-03-18 02:29:03.098415 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.098424 | orchestrator | 2026-03-18 02:29:03.098463 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:29:03.098475 | orchestrator | Wednesday 18 March 2026 02:29:00 +0000 (0:00:00.236) 0:00:41.134 ******* 2026-03-18 02:29:03.098485 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.098494 | orchestrator | 2026-03-18 02:29:03.098504 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:29:03.098513 | orchestrator | Wednesday 18 March 2026 02:29:00 +0000 (0:00:00.228) 0:00:41.363 ******* 2026-03-18 02:29:03.098523 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.098532 | orchestrator | 2026-03-18 02:29:03.098542 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:29:03.098551 | orchestrator | Wednesday 18 March 2026 02:29:00 +0000 (0:00:00.230) 0:00:41.593 ******* 2026-03-18 02:29:03.098561 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-18 02:29:03.098570 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-18 02:29:03.098580 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-18 02:29:03.098590 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-18 02:29:03.098599 | orchestrator | 2026-03-18 02:29:03.098609 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:29:03.098619 | orchestrator | Wednesday 18 March 2026 02:29:01 +0000 (0:00:00.928) 
0:00:42.522 ******* 2026-03-18 02:29:03.098637 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.098646 | orchestrator | 2026-03-18 02:29:03.098656 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:29:03.098665 | orchestrator | Wednesday 18 March 2026 02:29:01 +0000 (0:00:00.249) 0:00:42.772 ******* 2026-03-18 02:29:03.098675 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.098684 | orchestrator | 2026-03-18 02:29:03.098694 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:29:03.098703 | orchestrator | Wednesday 18 March 2026 02:29:02 +0000 (0:00:00.221) 0:00:42.993 ******* 2026-03-18 02:29:03.098712 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.098722 | orchestrator | 2026-03-18 02:29:03.098731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:29:03.098741 | orchestrator | Wednesday 18 March 2026 02:29:02 +0000 (0:00:00.773) 0:00:43.767 ******* 2026-03-18 02:29:03.098750 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:03.098760 | orchestrator | 2026-03-18 02:29:03.098778 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-18 02:29:07.555221 | orchestrator | Wednesday 18 March 2026 02:29:03 +0000 (0:00:00.228) 0:00:43.996 ******* 2026-03-18 02:29:07.555330 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-18 02:29:07.555345 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-18 02:29:07.555357 | orchestrator | 2026-03-18 02:29:07.555369 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-18 02:29:07.555380 | orchestrator | Wednesday 18 March 2026 02:29:03 +0000 (0:00:00.201) 0:00:44.197 ******* 2026-03-18 02:29:07.555402 | orchestrator | skipping: 
[testbed-node-5] 2026-03-18 02:29:07.555504 | orchestrator | 2026-03-18 02:29:07.555531 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-18 02:29:07.555551 | orchestrator | Wednesday 18 March 2026 02:29:03 +0000 (0:00:00.130) 0:00:44.328 ******* 2026-03-18 02:29:07.555571 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:07.555592 | orchestrator | 2026-03-18 02:29:07.555613 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-18 02:29:07.555633 | orchestrator | Wednesday 18 March 2026 02:29:03 +0000 (0:00:00.152) 0:00:44.480 ******* 2026-03-18 02:29:07.555653 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:29:07.555674 | orchestrator | 2026-03-18 02:29:07.555694 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-18 02:29:07.555715 | orchestrator | Wednesday 18 March 2026 02:29:03 +0000 (0:00:00.179) 0:00:44.660 ******* 2026-03-18 02:29:07.555736 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:29:07.555757 | orchestrator | 2026-03-18 02:29:07.555774 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-18 02:29:07.555788 | orchestrator | Wednesday 18 March 2026 02:29:03 +0000 (0:00:00.169) 0:00:44.830 ******* 2026-03-18 02:29:07.555802 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'def37aef-ab10-5729-81f7-b9371c5efcea'}}) 2026-03-18 02:29:07.555816 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f498c8c9-64fb-5c46-ab13-dfed2090c41f'}}) 2026-03-18 02:29:07.555829 | orchestrator | 2026-03-18 02:29:07.555841 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-18 02:29:07.555854 | orchestrator | Wednesday 18 March 2026 02:29:04 +0000 (0:00:00.208) 0:00:45.038 ******* 2026-03-18 02:29:07.555868 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'def37aef-ab10-5729-81f7-b9371c5efcea'}})
2026-03-18 02:29:07.555882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f498c8c9-64fb-5c46-ab13-dfed2090c41f'}})
2026-03-18 02:29:07.555894 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:29:07.555907 | orchestrator |
2026-03-18 02:29:07.555919 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-18 02:29:07.555953 | orchestrator | Wednesday 18 March 2026 02:29:04 +0000 (0:00:00.164) 0:00:45.203 *******
2026-03-18 02:29:07.555965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'def37aef-ab10-5729-81f7-b9371c5efcea'}})
2026-03-18 02:29:07.555976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f498c8c9-64fb-5c46-ab13-dfed2090c41f'}})
2026-03-18 02:29:07.555987 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:29:07.555998 | orchestrator |
2026-03-18 02:29:07.556009 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-18 02:29:07.556020 | orchestrator | Wednesday 18 March 2026 02:29:04 +0000 (0:00:00.172) 0:00:45.376 *******
2026-03-18 02:29:07.556031 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'def37aef-ab10-5729-81f7-b9371c5efcea'}})
2026-03-18 02:29:07.556042 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f498c8c9-64fb-5c46-ab13-dfed2090c41f'}})
2026-03-18 02:29:07.556053 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:29:07.556063 | orchestrator |
2026-03-18 02:29:07.556074 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-18 02:29:07.556085 | orchestrator | Wednesday 18 March 2026 02:29:04 +0000 (0:00:00.162) 0:00:45.538 *******
2026-03-18 02:29:07.556096 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:29:07.556106 | orchestrator |
2026-03-18 02:29:07.556117 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-18 02:29:07.556128 | orchestrator | Wednesday 18 March 2026 02:29:04 +0000 (0:00:00.186) 0:00:45.724 *******
2026-03-18 02:29:07.556139 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:29:07.556150 | orchestrator |
2026-03-18 02:29:07.556161 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-18 02:29:07.556171 | orchestrator | Wednesday 18 March 2026 02:29:05 +0000 (0:00:00.399) 0:00:46.124 *******
2026-03-18 02:29:07.556182 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:29:07.556193 | orchestrator |
2026-03-18 02:29:07.556204 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-18 02:29:07.556214 | orchestrator | Wednesday 18 March 2026 02:29:05 +0000 (0:00:00.161) 0:00:46.286 *******
2026-03-18 02:29:07.556225 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:29:07.556236 | orchestrator |
2026-03-18 02:29:07.556247 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-18 02:29:07.556258 | orchestrator | Wednesday 18 March 2026 02:29:05 +0000 (0:00:00.165) 0:00:46.452 *******
2026-03-18 02:29:07.556269 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:29:07.556280 | orchestrator |
2026-03-18 02:29:07.556290 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-18 02:29:07.556301 | orchestrator | Wednesday 18 March 2026 02:29:05 +0000 (0:00:00.142) 0:00:46.594 *******
2026-03-18 02:29:07.556312 | orchestrator | ok: [testbed-node-5] => {
2026-03-18 02:29:07.556323 | orchestrator |     "ceph_osd_devices": {
2026-03-18 02:29:07.556334 | orchestrator |         "sdb": {
2026-03-18 02:29:07.556365 | orchestrator |             "osd_lvm_uuid": "def37aef-ab10-5729-81f7-b9371c5efcea"
2026-03-18 02:29:07.556377 | orchestrator |         },
2026-03-18 02:29:07.556388 | orchestrator |         "sdc": {
2026-03-18 02:29:07.556399 | orchestrator |             "osd_lvm_uuid": "f498c8c9-64fb-5c46-ab13-dfed2090c41f"
2026-03-18 02:29:07.556410 | orchestrator |         }
2026-03-18 02:29:07.556421 | orchestrator |     }
2026-03-18 02:29:07.556463 | orchestrator | }
2026-03-18 02:29:07.556476 | orchestrator |
2026-03-18 02:29:07.556486 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-18 02:29:07.556497 | orchestrator | Wednesday 18 March 2026 02:29:05 +0000 (0:00:00.142) 0:00:46.737 *******
2026-03-18 02:29:07.556516 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:29:07.556527 | orchestrator |
2026-03-18 02:29:07.556538 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-18 02:29:07.556557 | orchestrator | Wednesday 18 March 2026 02:29:05 +0000 (0:00:00.144) 0:00:46.882 *******
2026-03-18 02:29:07.556567 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:29:07.556578 | orchestrator |
2026-03-18 02:29:07.556589 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-18 02:29:07.556599 | orchestrator | Wednesday 18 March 2026 02:30:06 +0000 (0:00:00.153) 0:00:47.035 *******
2026-03-18 02:29:07.556610 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:29:07.556621 | orchestrator |
2026-03-18 02:29:07.556632 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-18 02:29:07.556642 | orchestrator | Wednesday 18 March 2026 02:29:06 +0000 (0:00:00.131) 0:00:47.167 *******
2026-03-18 02:29:07.556653 | orchestrator | changed: [testbed-node-5] => {
2026-03-18 02:29:07.556665 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-18 02:29:07.556676 | orchestrator |         "ceph_osd_devices": {
2026-03-18 02:29:07.556687 | orchestrator |             "sdb": {
2026-03-18 02:29:07.556698 | orchestrator |                 "osd_lvm_uuid": "def37aef-ab10-5729-81f7-b9371c5efcea"
2026-03-18 02:29:07.556709 | orchestrator |             },
2026-03-18 02:29:07.556719 | orchestrator |             "sdc": {
2026-03-18 02:29:07.556730 | orchestrator |                 "osd_lvm_uuid": "f498c8c9-64fb-5c46-ab13-dfed2090c41f"
2026-03-18 02:29:07.556741 | orchestrator |             }
2026-03-18 02:29:07.556752 | orchestrator |         },
2026-03-18 02:29:07.556762 | orchestrator |         "lvm_volumes": [
2026-03-18 02:29:07.556773 | orchestrator |             {
2026-03-18 02:29:07.556784 | orchestrator |                 "data": "osd-block-def37aef-ab10-5729-81f7-b9371c5efcea",
2026-03-18 02:29:07.556795 | orchestrator |                 "data_vg": "ceph-def37aef-ab10-5729-81f7-b9371c5efcea"
2026-03-18 02:29:07.556805 | orchestrator |             },
2026-03-18 02:29:07.556816 | orchestrator |             {
2026-03-18 02:29:07.556827 | orchestrator |                 "data": "osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f",
2026-03-18 02:29:07.556837 | orchestrator |                 "data_vg": "ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f"
2026-03-18 02:29:07.556849 | orchestrator |             }
2026-03-18 02:29:07.556867 | orchestrator |         ]
2026-03-18 02:29:07.556895 | orchestrator |     }
2026-03-18 02:29:07.556914 | orchestrator | }
2026-03-18 02:29:07.556932 | orchestrator |
2026-03-18 02:29:07.556955 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-18 02:29:07.556978 | orchestrator | Wednesday 18 March 2026 02:29:06 +0000 (0:00:00.230) 0:00:47.397 *******
2026-03-18 02:29:07.556996 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-18 02:29:07.557014 | orchestrator |
2026-03-18 02:29:07.557033 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 02:29:07.557053 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-18 02:29:07.557073 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-18 02:29:07.557092 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-18 02:29:07.557111 | orchestrator |
2026-03-18 02:29:07.557128 | orchestrator |
2026-03-18 02:29:07.557146 | orchestrator |
2026-03-18 02:29:07.557164 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 02:29:07.557183 | orchestrator | Wednesday 18 March 2026 02:29:07 +0000 (0:00:01.044) 0:00:48.442 *******
2026-03-18 02:29:07.557203 | orchestrator | ===============================================================================
2026-03-18 02:29:07.557221 | orchestrator | Write configuration file ------------------------------------------------ 4.29s
2026-03-18 02:29:07.557239 | orchestrator | Add known partitions to the list of available block devices ------------- 2.12s
2026-03-18 02:29:07.557258 | orchestrator | Add known links to the list of available block devices ------------------ 1.40s
2026-03-18 02:29:07.557288 | orchestrator | Add known partitions to the list of available block devices ------------- 1.21s
2026-03-18 02:29:07.557307 | orchestrator | Print configuration data ------------------------------------------------ 1.18s
2026-03-18 02:29:07.557326 | orchestrator | Add known links to the list of available block devices ------------------ 1.04s
2026-03-18 02:29:07.557345 | orchestrator | Add known partitions to the list of available block devices ------------- 0.98s
2026-03-18 02:29:07.557364 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s
2026-03-18 02:29:07.557383 | orchestrator | Add known links to the list of available block devices ------------------ 0.87s
2026-03-18 02:29:07.557401 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.86s
2026-03-18 02:29:07.557419 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.85s
2026-03-18 02:29:07.557505 | orchestrator | Get initial list of available block devices ----------------------------- 0.79s
2026-03-18 02:29:07.557525 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s
2026-03-18 02:29:07.557556 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.78s
2026-03-18 02:29:08.043134 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s
2026-03-18 02:29:08.043207 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s
2026-03-18 02:29:08.043214 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s
2026-03-18 02:29:08.043220 | orchestrator | Set OSD devices config data --------------------------------------------- 0.74s
2026-03-18 02:29:08.043240 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2026-03-18 02:29:08.043245 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2026-03-18 02:29:30.750109 | orchestrator | 2026-03-18 02:29:30 | INFO  | Task 9a29511b-f9c3-4812-9b7e-bdcfa39fe1f4 (sync inventory) is running in background. Output coming soon.
2026-03-18 02:30:02.072560 | orchestrator | 2026-03-18 02:29:32 | INFO  | Starting group_vars file reorganization
2026-03-18 02:30:02.072694 | orchestrator | 2026-03-18 02:29:32 | INFO  | Moved 0 file(s) to their respective directories
2026-03-18 02:30:02.072722 | orchestrator | 2026-03-18 02:29:32 | INFO  | Group_vars file reorganization completed
2026-03-18 02:30:02.072742 | orchestrator | 2026-03-18 02:29:35 | INFO  | Starting variable preparation from inventory
2026-03-18 02:30:02.072762 | orchestrator | 2026-03-18 02:29:38 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-18 02:30:02.072782 | orchestrator | 2026-03-18 02:29:38 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-18 02:30:02.072801 | orchestrator | 2026-03-18 02:29:38 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-18 02:30:02.072821 | orchestrator | 2026-03-18 02:29:38 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-18 02:30:02.072840 | orchestrator | 2026-03-18 02:29:38 | INFO  | Variable preparation completed
2026-03-18 02:30:02.072860 | orchestrator | 2026-03-18 02:29:40 | INFO  | Starting inventory overwrite handling
2026-03-18 02:30:02.072878 | orchestrator | 2026-03-18 02:29:40 | INFO  | Handling group overwrites in 99-overwrite
2026-03-18 02:30:02.072896 | orchestrator | 2026-03-18 02:29:40 | INFO  | Removing group frr:children from 60-generic
2026-03-18 02:30:02.072914 | orchestrator | 2026-03-18 02:29:40 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-18 02:30:02.072933 | orchestrator | 2026-03-18 02:29:40 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-18 02:30:02.072954 | orchestrator | 2026-03-18 02:29:40 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-18 02:30:02.073010 | orchestrator | 2026-03-18 02:29:40 | INFO  | Handling group overwrites in 20-roles
2026-03-18 02:30:02.073030 | orchestrator | 2026-03-18 02:29:40 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-18 02:30:02.073051 | orchestrator | 2026-03-18 02:29:40 | INFO  | Removed 5 group(s) in total
2026-03-18 02:30:02.073072 | orchestrator | 2026-03-18 02:29:40 | INFO  | Inventory overwrite handling completed
2026-03-18 02:30:02.073094 | orchestrator | 2026-03-18 02:29:42 | INFO  | Starting merge of inventory files
2026-03-18 02:30:02.073118 | orchestrator | 2026-03-18 02:29:42 | INFO  | Inventory files merged successfully
2026-03-18 02:30:02.073138 | orchestrator | 2026-03-18 02:29:47 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-18 02:30:02.073157 | orchestrator | 2026-03-18 02:30:00 | INFO  | Successfully wrote ClusterShell configuration
2026-03-18 02:30:02.073180 | orchestrator | [master 9daf32a] 2026-03-18-02-30
2026-03-18 02:30:02.073203 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-18 02:30:04.638142 | orchestrator | 2026-03-18 02:30:04 | INFO  | Task 67cafa8f-8fdb-4215-a44f-5c39239c4ff6 (ceph-create-lvm-devices) was prepared for execution.
2026-03-18 02:30:04.638269 | orchestrator | 2026-03-18 02:30:04 | INFO  | It takes a moment until task 67cafa8f-8fdb-4215-a44f-5c39239c4ff6 (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-18 02:30:17.579923 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-18 02:30:17.580012 | orchestrator | 2.16.14
2026-03-18 02:30:17.580023 | orchestrator |
2026-03-18 02:30:17.580031 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-18 02:30:17.580039 | orchestrator |
2026-03-18 02:30:17.580046 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-18 02:30:17.580053 | orchestrator | Wednesday 18 March 2026 02:30:09 +0000 (0:00:00.340) 0:00:00.340 *******
2026-03-18 02:30:17.580061 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-18 02:30:17.580068 | orchestrator |
2026-03-18 02:30:17.580075 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-18 02:30:17.580082 | orchestrator | Wednesday 18 March 2026 02:30:09 +0000 (0:00:00.272) 0:00:00.613 *******
2026-03-18 02:30:17.580089 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:30:17.580095 | orchestrator |
2026-03-18 02:30:17.580102 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:17.580109 | orchestrator | Wednesday 18 March 2026 02:30:09 +0000 (0:00:00.248) 0:00:00.862 *******
2026-03-18 02:30:17.580116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-18 02:30:17.580122 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-18 02:30:17.580129 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-18 02:30:17.580147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-18 02:30:17.580155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-18 02:30:17.580161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-18 02:30:17.580168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-18 02:30:17.580175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-18 02:30:17.580181 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-18 02:30:17.580188 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-18 02:30:17.580194 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-18 02:30:17.580220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-18 02:30:17.580227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-18 02:30:17.580234 | orchestrator |
2026-03-18 02:30:17.580240 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:17.580247 | orchestrator | Wednesday 18 March 2026 02:30:10 +0000 (0:00:00.573) 0:00:01.436 *******
2026-03-18 02:30:17.580253 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:17.580260 | orchestrator |
2026-03-18 02:30:17.580266 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:17.580273 | orchestrator | Wednesday 18 March 2026 02:30:10 +0000 (0:00:00.212) 0:00:01.648 *******
2026-03-18 02:30:17.580279 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:17.580286 | orchestrator |
2026-03-18 02:30:17.580292 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:17.580299 | orchestrator | Wednesday 18 March 2026 02:30:11 +0000 (0:00:00.244) 0:00:01.893 *******
2026-03-18 02:30:17.580305 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:17.580312 | orchestrator |
2026-03-18 02:30:17.580318 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:17.580325 | orchestrator | Wednesday 18 March 2026 02:30:11 +0000 (0:00:00.201) 0:00:02.094 *******
2026-03-18 02:30:17.580331 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:17.580338 | orchestrator |
2026-03-18 02:30:17.580344 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:17.580351 | orchestrator | Wednesday 18 March 2026 02:30:11 +0000 (0:00:00.217) 0:00:02.311 *******
2026-03-18 02:30:17.580357 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:17.580364 | orchestrator |
2026-03-18 02:30:17.580370 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:17.580398 | orchestrator | Wednesday 18 March 2026 02:30:11 +0000 (0:00:00.219) 0:00:02.531 *******
2026-03-18 02:30:17.580406 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:17.580413 | orchestrator |
2026-03-18 02:30:17.580419 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:17.580426 | orchestrator | Wednesday 18 March 2026 02:30:11 +0000 (0:00:00.207) 0:00:02.739 *******
2026-03-18 02:30:17.580432 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:17.580439 | orchestrator |
2026-03-18 02:30:17.580445 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:17.580462 | orchestrator | Wednesday 18 March 2026 02:30:12 +0000 (0:00:00.235) 0:00:02.974 *******
2026-03-18 02:30:17.580468 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:17.580475 | orchestrator |
2026-03-18 02:30:17.580483 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:17.580491 | orchestrator | Wednesday 18 March 2026 02:30:12 +0000 (0:00:00.228) 0:00:03.203 *******
2026-03-18 02:30:17.580499 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561)
2026-03-18 02:30:17.580515 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561)
2026-03-18 02:30:17.580522 | orchestrator |
2026-03-18 02:30:17.580530 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:17.580550 | orchestrator | Wednesday 18 March 2026 02:30:12 +0000 (0:00:00.458) 0:00:03.661 *******
2026-03-18 02:30:17.580558 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768)
2026-03-18 02:30:17.580566 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768)
2026-03-18 02:30:17.580574 | orchestrator |
2026-03-18 02:30:17.580582 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:17.580590 | orchestrator | Wednesday 18 March 2026 02:30:13 +0000 (0:00:00.746) 0:00:04.408 *******
2026-03-18 02:30:17.580603 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e)
2026-03-18 02:30:17.580611 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e)
2026-03-18 02:30:17.580619 | orchestrator |
2026-03-18 02:30:17.580626 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:17.580634 | orchestrator | Wednesday 18 March 2026 02:30:14 +0000 (0:00:00.691) 0:00:05.100 *******
2026-03-18 02:30:17.580641 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa)
2026-03-18 02:30:17.580649 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa)
2026-03-18 02:30:17.580657 | orchestrator |
2026-03-18 02:30:17.580664 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:17.580676 | orchestrator | Wednesday 18 March 2026 02:30:15 +0000 (0:00:00.955) 0:00:06.055 *******
2026-03-18 02:30:17.580684 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-18 02:30:17.580692 | orchestrator |
2026-03-18 02:30:17.580699 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:17.580707 | orchestrator | Wednesday 18 March 2026 02:30:15 +0000 (0:00:00.377) 0:00:06.433 *******
2026-03-18 02:30:17.580714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-18 02:30:17.580722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-18 02:30:17.580729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-18 02:30:17.580737 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-18 02:30:17.580744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-18 02:30:17.580751 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-18 02:30:17.580759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-18 02:30:17.580766 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-18 02:30:17.580773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-18 02:30:17.580780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-18 02:30:17.580786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-18 02:30:17.580793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-18 02:30:17.580799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-18 02:30:17.580806 | orchestrator |
2026-03-18 02:30:17.580812 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:17.580819 | orchestrator | Wednesday 18 March 2026 02:30:16 +0000 (0:00:00.456) 0:00:06.890 *******
2026-03-18 02:30:17.580825 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:17.580831 | orchestrator |
2026-03-18 02:30:17.580838 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:17.580844 | orchestrator | Wednesday 18 March 2026 02:30:16 +0000 (0:00:00.209) 0:00:07.100 *******
2026-03-18 02:30:17.580851 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:17.580858 | orchestrator |
2026-03-18 02:30:17.580864 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:17.580871 | orchestrator | Wednesday 18 March 2026 02:30:16 +0000 (0:00:00.220) 0:00:07.320 *******
2026-03-18 02:30:17.580877 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:17.580884 | orchestrator |
2026-03-18 02:30:17.580890 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:17.580902 | orchestrator | Wednesday 18 March 2026 02:30:16 +0000 (0:00:00.217) 0:00:07.537 *******
2026-03-18 02:30:17.580909 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:17.580915 | orchestrator |
2026-03-18 02:30:17.580922 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:17.580928 | orchestrator | Wednesday 18 March 2026 02:30:16 +0000 (0:00:00.206) 0:00:07.743 *******
2026-03-18 02:30:17.580935 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:17.580941 | orchestrator |
2026-03-18 02:30:17.580948 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:17.580954 | orchestrator | Wednesday 18 March 2026 02:30:17 +0000 (0:00:00.259) 0:00:08.003 *******
2026-03-18 02:30:17.580961 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:17.580967 | orchestrator |
2026-03-18 02:30:17.580974 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:17.580980 | orchestrator | Wednesday 18 March 2026 02:30:17 +0000 (0:00:00.218) 0:00:08.222 *******
2026-03-18 02:30:17.580987 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:17.580993 | orchestrator |
2026-03-18 02:30:17.581004 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:26.213514 | orchestrator | Wednesday 18 March 2026 02:30:17 +0000 (0:00:00.212) 0:00:08.434 *******
2026-03-18 02:30:26.213649 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:26.213667 | orchestrator |
2026-03-18 02:30:26.213680 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:26.213692 | orchestrator | Wednesday 18 March 2026 02:30:18 +0000 (0:00:00.729) 0:00:09.164 *******
2026-03-18 02:30:26.213703 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-18 02:30:26.213715 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-18 02:30:26.213727 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-18 02:30:26.213738 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-18 02:30:26.213749 | orchestrator |
2026-03-18 02:30:26.213760 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:26.213771 | orchestrator | Wednesday 18 March 2026 02:30:19 +0000 (0:00:00.724) 0:00:09.888 *******
2026-03-18 02:30:26.213782 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:26.213793 | orchestrator |
2026-03-18 02:30:26.213804 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:26.213815 | orchestrator | Wednesday 18 March 2026 02:30:19 +0000 (0:00:00.210) 0:00:10.099 *******
2026-03-18 02:30:26.213829 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:26.213849 | orchestrator |
2026-03-18 02:30:26.213867 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:26.213887 | orchestrator | Wednesday 18 March 2026 02:30:19 +0000 (0:00:00.204) 0:00:10.304 *******
2026-03-18 02:30:26.213928 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:26.213948 | orchestrator |
2026-03-18 02:30:26.213967 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:26.213985 | orchestrator | Wednesday 18 March 2026 02:30:19 +0000 (0:00:00.229) 0:00:10.534 *******
2026-03-18 02:30:26.214005 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:26.214123 | orchestrator |
2026-03-18 02:30:26.214145 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-18 02:30:26.214166 | orchestrator | Wednesday 18 March 2026 02:30:19 +0000 (0:00:00.201) 0:00:10.735 *******
2026-03-18 02:30:26.214187 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:26.214207 | orchestrator |
2026-03-18 02:30:26.214227 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-18 02:30:26.214246 | orchestrator | Wednesday 18 March 2026 02:30:20 +0000 (0:00:00.149) 0:00:10.884 *******
2026-03-18 02:30:26.214268 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'dcb28020-3d32-5af4-a4b7-0acc667eefcb'}})
2026-03-18 02:30:26.214289 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9a3797da-ebdd-566a-aa35-3713ec7e039a'}})
2026-03-18 02:30:26.214342 | orchestrator |
2026-03-18 02:30:26.214363 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-18 02:30:26.214402 | orchestrator | Wednesday 18 March 2026 02:30:20 +0000 (0:00:00.210) 0:00:11.095 *******
2026-03-18 02:30:26.214416 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})
2026-03-18 02:30:26.214429 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})
2026-03-18 02:30:26.214440 | orchestrator |
2026-03-18 02:30:26.214450 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-18 02:30:26.214461 | orchestrator | Wednesday 18 March 2026 02:30:22 +0000 (0:00:02.077) 0:00:13.172 *******
2026-03-18 02:30:26.214472 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})
2026-03-18 02:30:26.214484 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})
2026-03-18 02:30:26.214495 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:26.214506 | orchestrator |
2026-03-18 02:30:26.214517 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-18 02:30:26.214527 | orchestrator | Wednesday 18 March 2026 02:30:22 +0000 (0:00:00.164) 0:00:13.336 *******
2026-03-18 02:30:26.214538 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})
2026-03-18 02:30:26.214549 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})
2026-03-18 02:30:26.214560 | orchestrator |
2026-03-18 02:30:26.214571 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-18 02:30:26.214582 | orchestrator | Wednesday 18 March 2026 02:30:23 +0000 (0:00:01.490) 0:00:14.827 *******
2026-03-18 02:30:26.214592 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})
2026-03-18 02:30:26.214603 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})
2026-03-18 02:30:26.214614 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:26.214625 | orchestrator |
2026-03-18 02:30:26.214635 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-18 02:30:26.214649 | orchestrator | Wednesday 18 March 2026 02:30:24 +0000 (0:00:00.395) 0:00:15.010 *******
2026-03-18 02:30:26.214693 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:26.214712 | orchestrator |
2026-03-18 02:30:26.214731 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-18 02:30:26.214749 | orchestrator | Wednesday 18 March 2026 02:30:24 +0000 (0:00:00.395) 0:00:15.405 *******
2026-03-18 02:30:26.214768 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})
2026-03-18 02:30:26.214788 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})
2026-03-18 02:30:26.214806 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:26.214825 | orchestrator |
2026-03-18 02:30:26.214845 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-18 02:30:26.214863 | orchestrator | Wednesday 18 March 2026 02:30:24 +0000 (0:00:00.186) 0:00:15.592 *******
2026-03-18 02:30:26.214881 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:26.214892 | orchestrator |
2026-03-18 02:30:26.214914 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-18 02:30:26.214925 | orchestrator | Wednesday 18 March 2026 02:30:24 +0000 (0:00:00.179) 0:00:15.772 *******
2026-03-18 02:30:26.214944 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})
2026-03-18 02:30:26.214955 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})
2026-03-18 02:30:26.214966 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:26.214977 | orchestrator |
2026-03-18 02:30:26.214988 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-18 02:30:26.214998 | orchestrator | Wednesday 18 March 2026 02:30:25 +0000 (0:00:00.159) 0:00:15.931 *******
2026-03-18 02:30:26.215009 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:26.215020 | orchestrator |
2026-03-18 02:30:26.215030 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-18 02:30:26.215041 | orchestrator | Wednesday 18 March 2026 02:30:25 +0000 (0:00:00.155) 0:00:16.087 *******
2026-03-18 02:30:26.215052 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})
2026-03-18 02:30:26.215062 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})
2026-03-18 02:30:26.215073 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:26.215084 | orchestrator |
2026-03-18 02:30:26.215094 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-18 02:30:26.215105 | orchestrator | Wednesday 18 March 2026 02:30:25 +0000 (0:00:00.163) 0:00:16.260 *******
2026-03-18 02:30:26.215116 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:30:26.215127 | orchestrator |
2026-03-18 02:30:26.215138 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-18 02:30:26.215149 | orchestrator | Wednesday 18 March 2026 02:30:25 +0000 (0:00:00.158) 0:00:16.424 *******
2026-03-18 02:30:26.215160 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})
2026-03-18 02:30:26.215171 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})
2026-03-18 02:30:26.215182 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:30:26.215192 | orchestrator |
2026-03-18 02:30:26.215203 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-18 02:30:26.215217 | orchestrator | Wednesday 18 March 2026 02:30:25 +0000 (0:00:00.158) 0:00:16.582 *******
2026-03-18 02:30:26.215234 | orchestrator | skipping: [testbed-node-3] =>
(item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})  2026-03-18 02:30:26.215249 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})  2026-03-18 02:30:26.215267 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:26.215284 | orchestrator | 2026-03-18 02:30:26.215301 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-18 02:30:26.215319 | orchestrator | Wednesday 18 March 2026 02:30:25 +0000 (0:00:00.169) 0:00:16.752 ******* 2026-03-18 02:30:26.215337 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})  2026-03-18 02:30:26.215354 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})  2026-03-18 02:30:26.215400 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:26.215431 | orchestrator | 2026-03-18 02:30:26.215449 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-18 02:30:26.215467 | orchestrator | Wednesday 18 March 2026 02:30:26 +0000 (0:00:00.162) 0:00:16.914 ******* 2026-03-18 02:30:26.215504 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:26.215523 | orchestrator | 2026-03-18 02:30:26.215557 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-18 02:30:26.215592 | orchestrator | Wednesday 18 March 2026 02:30:26 +0000 (0:00:00.154) 0:00:17.069 ******* 2026-03-18 02:30:33.240700 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.240803 | orchestrator | 2026-03-18 02:30:33.240821 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-03-18 02:30:33.240834 | orchestrator | Wednesday 18 March 2026 02:30:26 +0000 (0:00:00.145) 0:00:17.214 ******* 2026-03-18 02:30:33.240845 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.240856 | orchestrator | 2026-03-18 02:30:33.240868 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-18 02:30:33.240881 | orchestrator | Wednesday 18 March 2026 02:30:26 +0000 (0:00:00.376) 0:00:17.591 ******* 2026-03-18 02:30:33.240902 | orchestrator | ok: [testbed-node-3] => { 2026-03-18 02:30:33.240927 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-18 02:30:33.240955 | orchestrator | } 2026-03-18 02:30:33.240975 | orchestrator | 2026-03-18 02:30:33.240994 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-18 02:30:33.241011 | orchestrator | Wednesday 18 March 2026 02:30:26 +0000 (0:00:00.166) 0:00:17.758 ******* 2026-03-18 02:30:33.241028 | orchestrator | ok: [testbed-node-3] => { 2026-03-18 02:30:33.241047 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-18 02:30:33.241066 | orchestrator | } 2026-03-18 02:30:33.241085 | orchestrator | 2026-03-18 02:30:33.241103 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-18 02:30:33.241122 | orchestrator | Wednesday 18 March 2026 02:30:27 +0000 (0:00:00.160) 0:00:17.918 ******* 2026-03-18 02:30:33.241141 | orchestrator | ok: [testbed-node-3] => { 2026-03-18 02:30:33.241183 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-18 02:30:33.241203 | orchestrator | } 2026-03-18 02:30:33.241222 | orchestrator | 2026-03-18 02:30:33.241241 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-18 02:30:33.241261 | orchestrator | Wednesday 18 March 2026 02:30:27 +0000 (0:00:00.171) 0:00:18.090 ******* 2026-03-18 02:30:33.241280 | orchestrator | ok: 
[testbed-node-3] 2026-03-18 02:30:33.241300 | orchestrator | 2026-03-18 02:30:33.241320 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-18 02:30:33.241341 | orchestrator | Wednesday 18 March 2026 02:30:27 +0000 (0:00:00.682) 0:00:18.772 ******* 2026-03-18 02:30:33.241358 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:30:33.241488 | orchestrator | 2026-03-18 02:30:33.241507 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-18 02:30:33.241523 | orchestrator | Wednesday 18 March 2026 02:30:28 +0000 (0:00:00.530) 0:00:19.302 ******* 2026-03-18 02:30:33.241541 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:30:33.241559 | orchestrator | 2026-03-18 02:30:33.241578 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-18 02:30:33.241597 | orchestrator | Wednesday 18 March 2026 02:30:28 +0000 (0:00:00.539) 0:00:19.842 ******* 2026-03-18 02:30:33.241615 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:30:33.241632 | orchestrator | 2026-03-18 02:30:33.241649 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-18 02:30:33.241667 | orchestrator | Wednesday 18 March 2026 02:30:29 +0000 (0:00:00.159) 0:00:20.001 ******* 2026-03-18 02:30:33.241685 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.241703 | orchestrator | 2026-03-18 02:30:33.241721 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-18 02:30:33.241739 | orchestrator | Wednesday 18 March 2026 02:30:29 +0000 (0:00:00.116) 0:00:20.118 ******* 2026-03-18 02:30:33.241756 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.241807 | orchestrator | 2026-03-18 02:30:33.241826 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-18 02:30:33.241844 | orchestrator | 
Wednesday 18 March 2026 02:30:29 +0000 (0:00:00.134) 0:00:20.253 ******* 2026-03-18 02:30:33.241863 | orchestrator | ok: [testbed-node-3] => { 2026-03-18 02:30:33.241881 | orchestrator |  "vgs_report": { 2026-03-18 02:30:33.241899 | orchestrator |  "vg": [] 2026-03-18 02:30:33.241964 | orchestrator |  } 2026-03-18 02:30:33.241985 | orchestrator | } 2026-03-18 02:30:33.242003 | orchestrator | 2026-03-18 02:30:33.242248 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-18 02:30:33.242271 | orchestrator | Wednesday 18 March 2026 02:30:29 +0000 (0:00:00.149) 0:00:20.402 ******* 2026-03-18 02:30:33.242290 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.242307 | orchestrator | 2026-03-18 02:30:33.242326 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-18 02:30:33.242346 | orchestrator | Wednesday 18 March 2026 02:30:29 +0000 (0:00:00.152) 0:00:20.554 ******* 2026-03-18 02:30:33.242394 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.242416 | orchestrator | 2026-03-18 02:30:33.242435 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-18 02:30:33.242449 | orchestrator | Wednesday 18 March 2026 02:30:30 +0000 (0:00:00.384) 0:00:20.939 ******* 2026-03-18 02:30:33.242460 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.242470 | orchestrator | 2026-03-18 02:30:33.242481 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-18 02:30:33.242492 | orchestrator | Wednesday 18 March 2026 02:30:30 +0000 (0:00:00.163) 0:00:21.102 ******* 2026-03-18 02:30:33.242503 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.242513 | orchestrator | 2026-03-18 02:30:33.242524 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-18 02:30:33.242535 | orchestrator | 
Wednesday 18 March 2026 02:30:30 +0000 (0:00:00.165) 0:00:21.268 ******* 2026-03-18 02:30:33.242546 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.242556 | orchestrator | 2026-03-18 02:30:33.242567 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-18 02:30:33.242578 | orchestrator | Wednesday 18 March 2026 02:30:30 +0000 (0:00:00.147) 0:00:21.415 ******* 2026-03-18 02:30:33.242589 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.242599 | orchestrator | 2026-03-18 02:30:33.242610 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-18 02:30:33.242621 | orchestrator | Wednesday 18 March 2026 02:30:30 +0000 (0:00:00.145) 0:00:21.560 ******* 2026-03-18 02:30:33.242631 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.242642 | orchestrator | 2026-03-18 02:30:33.242674 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-18 02:30:33.242686 | orchestrator | Wednesday 18 March 2026 02:30:30 +0000 (0:00:00.158) 0:00:21.719 ******* 2026-03-18 02:30:33.242722 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.242734 | orchestrator | 2026-03-18 02:30:33.242744 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-18 02:30:33.242755 | orchestrator | Wednesday 18 March 2026 02:30:30 +0000 (0:00:00.150) 0:00:21.869 ******* 2026-03-18 02:30:33.242767 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.242778 | orchestrator | 2026-03-18 02:30:33.242789 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-18 02:30:33.242799 | orchestrator | Wednesday 18 March 2026 02:30:31 +0000 (0:00:00.142) 0:00:22.011 ******* 2026-03-18 02:30:33.242819 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.242838 | orchestrator | 2026-03-18 02:30:33.242858 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-18 02:30:33.242877 | orchestrator | Wednesday 18 March 2026 02:30:31 +0000 (0:00:00.159) 0:00:22.171 ******* 2026-03-18 02:30:33.242893 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.242910 | orchestrator | 2026-03-18 02:30:33.242946 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-18 02:30:33.242965 | orchestrator | Wednesday 18 March 2026 02:30:31 +0000 (0:00:00.178) 0:00:22.349 ******* 2026-03-18 02:30:33.242983 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.243001 | orchestrator | 2026-03-18 02:30:33.243020 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-18 02:30:33.243040 | orchestrator | Wednesday 18 March 2026 02:30:31 +0000 (0:00:00.151) 0:00:22.501 ******* 2026-03-18 02:30:33.243072 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.243091 | orchestrator | 2026-03-18 02:30:33.243109 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-18 02:30:33.243129 | orchestrator | Wednesday 18 March 2026 02:30:31 +0000 (0:00:00.150) 0:00:22.652 ******* 2026-03-18 02:30:33.243149 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.243169 | orchestrator | 2026-03-18 02:30:33.243189 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-18 02:30:33.243209 | orchestrator | Wednesday 18 March 2026 02:30:32 +0000 (0:00:00.398) 0:00:23.050 ******* 2026-03-18 02:30:33.243230 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})  2026-03-18 02:30:33.243252 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 
'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})  2026-03-18 02:30:33.243273 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.243292 | orchestrator | 2026-03-18 02:30:33.243311 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-18 02:30:33.243330 | orchestrator | Wednesday 18 March 2026 02:30:32 +0000 (0:00:00.158) 0:00:23.209 ******* 2026-03-18 02:30:33.243349 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})  2026-03-18 02:30:33.243394 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})  2026-03-18 02:30:33.243413 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.243430 | orchestrator | 2026-03-18 02:30:33.243446 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-18 02:30:33.243464 | orchestrator | Wednesday 18 March 2026 02:30:32 +0000 (0:00:00.169) 0:00:23.378 ******* 2026-03-18 02:30:33.243481 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})  2026-03-18 02:30:33.243499 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})  2026-03-18 02:30:33.243517 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.243536 | orchestrator | 2026-03-18 02:30:33.243553 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-18 02:30:33.243571 | orchestrator | Wednesday 18 March 2026 02:30:32 +0000 (0:00:00.179) 0:00:23.558 ******* 2026-03-18 02:30:33.243589 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})  2026-03-18 02:30:33.243608 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})  2026-03-18 02:30:33.243627 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.243645 | orchestrator | 2026-03-18 02:30:33.243664 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-18 02:30:33.243677 | orchestrator | Wednesday 18 March 2026 02:30:32 +0000 (0:00:00.177) 0:00:23.735 ******* 2026-03-18 02:30:33.243688 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})  2026-03-18 02:30:33.243712 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})  2026-03-18 02:30:33.243723 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:33.243734 | orchestrator | 2026-03-18 02:30:33.243745 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-18 02:30:33.243756 | orchestrator | Wednesday 18 March 2026 02:30:33 +0000 (0:00:00.181) 0:00:23.916 ******* 2026-03-18 02:30:33.243782 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})  2026-03-18 02:30:39.009148 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})  2026-03-18 02:30:39.009246 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:39.009261 | orchestrator | 2026-03-18 02:30:39.009293 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-18 02:30:39.009315 | orchestrator | Wednesday 18 March 2026 02:30:33 +0000 (0:00:00.183) 0:00:24.100 ******* 2026-03-18 02:30:39.009326 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})  2026-03-18 02:30:39.009336 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})  2026-03-18 02:30:39.009346 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:39.009355 | orchestrator | 2026-03-18 02:30:39.009409 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-18 02:30:39.009420 | orchestrator | Wednesday 18 March 2026 02:30:33 +0000 (0:00:00.166) 0:00:24.266 ******* 2026-03-18 02:30:39.009446 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})  2026-03-18 02:30:39.009457 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})  2026-03-18 02:30:39.009466 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:39.009481 | orchestrator | 2026-03-18 02:30:39.009498 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-18 02:30:39.009515 | orchestrator | Wednesday 18 March 2026 02:30:33 +0000 (0:00:00.160) 0:00:24.426 ******* 2026-03-18 02:30:39.009539 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:30:39.009559 | orchestrator | 2026-03-18 02:30:39.009576 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-18 02:30:39.009593 | orchestrator | Wednesday 18 March 2026 02:30:34 +0000 
(0:00:00.530) 0:00:24.956 ******* 2026-03-18 02:30:39.009610 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:30:39.009626 | orchestrator | 2026-03-18 02:30:39.009641 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-18 02:30:39.009658 | orchestrator | Wednesday 18 March 2026 02:30:34 +0000 (0:00:00.557) 0:00:25.514 ******* 2026-03-18 02:30:39.009675 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:30:39.009693 | orchestrator | 2026-03-18 02:30:39.009712 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-18 02:30:39.009728 | orchestrator | Wednesday 18 March 2026 02:30:34 +0000 (0:00:00.158) 0:00:25.673 ******* 2026-03-18 02:30:39.009749 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'vg_name': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'}) 2026-03-18 02:30:39.009771 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'vg_name': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'}) 2026-03-18 02:30:39.009785 | orchestrator | 2026-03-18 02:30:39.009796 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-18 02:30:39.009831 | orchestrator | Wednesday 18 March 2026 02:30:34 +0000 (0:00:00.182) 0:00:25.855 ******* 2026-03-18 02:30:39.009843 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})  2026-03-18 02:30:39.009855 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})  2026-03-18 02:30:39.009866 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:39.009876 | orchestrator | 2026-03-18 02:30:39.009885 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-18 02:30:39.009895 | orchestrator | Wednesday 18 March 2026 02:30:35 +0000 (0:00:00.457) 0:00:26.312 ******* 2026-03-18 02:30:39.009905 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})  2026-03-18 02:30:39.009914 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})  2026-03-18 02:30:39.009924 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:39.009933 | orchestrator | 2026-03-18 02:30:39.009943 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-18 02:30:39.009952 | orchestrator | Wednesday 18 March 2026 02:30:35 +0000 (0:00:00.182) 0:00:26.495 ******* 2026-03-18 02:30:39.009962 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})  2026-03-18 02:30:39.009972 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})  2026-03-18 02:30:39.009981 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:30:39.009991 | orchestrator | 2026-03-18 02:30:39.010000 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-18 02:30:39.010010 | orchestrator | Wednesday 18 March 2026 02:30:35 +0000 (0:00:00.192) 0:00:26.688 ******* 2026-03-18 02:30:39.010107 | orchestrator | ok: [testbed-node-3] => { 2026-03-18 02:30:39.010118 | orchestrator |  "lvm_report": { 2026-03-18 02:30:39.010128 | orchestrator |  "lv": [ 2026-03-18 02:30:39.010137 | orchestrator |  { 2026-03-18 02:30:39.010147 | orchestrator |  "lv_name": 
"osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a", 2026-03-18 02:30:39.010157 | orchestrator |  "vg_name": "ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a" 2026-03-18 02:30:39.010166 | orchestrator |  }, 2026-03-18 02:30:39.010176 | orchestrator |  { 2026-03-18 02:30:39.010185 | orchestrator |  "lv_name": "osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb", 2026-03-18 02:30:39.010195 | orchestrator |  "vg_name": "ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb" 2026-03-18 02:30:39.010204 | orchestrator |  } 2026-03-18 02:30:39.010214 | orchestrator |  ], 2026-03-18 02:30:39.010223 | orchestrator |  "pv": [ 2026-03-18 02:30:39.010233 | orchestrator |  { 2026-03-18 02:30:39.010243 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-18 02:30:39.010253 | orchestrator |  "vg_name": "ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb" 2026-03-18 02:30:39.010264 | orchestrator |  }, 2026-03-18 02:30:39.010275 | orchestrator |  { 2026-03-18 02:30:39.010285 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-18 02:30:39.010304 | orchestrator |  "vg_name": "ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a" 2026-03-18 02:30:39.010315 | orchestrator |  } 2026-03-18 02:30:39.010326 | orchestrator |  ] 2026-03-18 02:30:39.010336 | orchestrator |  } 2026-03-18 02:30:39.010347 | orchestrator | } 2026-03-18 02:30:39.010358 | orchestrator | 2026-03-18 02:30:39.010467 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-18 02:30:39.010502 | orchestrator | 2026-03-18 02:30:39.010521 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-18 02:30:39.010541 | orchestrator | Wednesday 18 March 2026 02:30:36 +0000 (0:00:00.328) 0:00:27.016 ******* 2026-03-18 02:30:39.010560 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-18 02:30:39.010580 | orchestrator | 2026-03-18 02:30:39.010600 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-18 
02:30:39.010619 | orchestrator | Wednesday 18 March 2026 02:30:36 +0000 (0:00:00.273) 0:00:27.290 ******* 2026-03-18 02:30:39.010647 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:30:39.010669 | orchestrator | 2026-03-18 02:30:39.010689 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:30:39.010710 | orchestrator | Wednesday 18 March 2026 02:30:36 +0000 (0:00:00.252) 0:00:27.542 ******* 2026-03-18 02:30:39.010730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-18 02:30:39.010745 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-18 02:30:39.010755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-18 02:30:39.010766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-18 02:30:39.010777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-18 02:30:39.010788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-18 02:30:39.010799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-18 02:30:39.010809 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-18 02:30:39.010820 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-18 02:30:39.010831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-18 02:30:39.010842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-18 02:30:39.010852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-18 02:30:39.010863 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-18 02:30:39.010874 | orchestrator | 2026-03-18 02:30:39.010884 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:30:39.010895 | orchestrator | Wednesday 18 March 2026 02:30:37 +0000 (0:00:00.465) 0:00:28.008 ******* 2026-03-18 02:30:39.010906 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:30:39.010917 | orchestrator | 2026-03-18 02:30:39.010928 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:30:39.010938 | orchestrator | Wednesday 18 March 2026 02:30:37 +0000 (0:00:00.203) 0:00:28.212 ******* 2026-03-18 02:30:39.010949 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:30:39.010960 | orchestrator | 2026-03-18 02:30:39.010970 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:30:39.010981 | orchestrator | Wednesday 18 March 2026 02:30:38 +0000 (0:00:00.706) 0:00:28.918 ******* 2026-03-18 02:30:39.010992 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:30:39.011003 | orchestrator | 2026-03-18 02:30:39.011014 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:30:39.011026 | orchestrator | Wednesday 18 March 2026 02:30:38 +0000 (0:00:00.256) 0:00:29.175 ******* 2026-03-18 02:30:39.011045 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:30:39.011062 | orchestrator | 2026-03-18 02:30:39.011081 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:30:39.011101 | orchestrator | Wednesday 18 March 2026 02:30:38 +0000 (0:00:00.256) 0:00:29.432 ******* 2026-03-18 02:30:39.011118 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:30:39.011136 | orchestrator | 2026-03-18 02:30:39.011168 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-18 02:30:39.011184 | orchestrator | Wednesday 18 March 2026 02:30:38 +0000 (0:00:00.212) 0:00:29.644 ******* 2026-03-18 02:30:39.011195 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:30:39.011206 | orchestrator | 2026-03-18 02:30:39.011228 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:30:50.978742 | orchestrator | Wednesday 18 March 2026 02:30:38 +0000 (0:00:00.222) 0:00:29.867 ******* 2026-03-18 02:30:50.978820 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:30:50.978827 | orchestrator | 2026-03-18 02:30:50.978832 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:30:50.978837 | orchestrator | Wednesday 18 March 2026 02:30:39 +0000 (0:00:00.218) 0:00:30.085 ******* 2026-03-18 02:30:50.978841 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:30:50.978845 | orchestrator | 2026-03-18 02:30:50.978849 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:30:50.978853 | orchestrator | Wednesday 18 March 2026 02:30:39 +0000 (0:00:00.265) 0:00:30.351 ******* 2026-03-18 02:30:50.978857 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d) 2026-03-18 02:30:50.978862 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d) 2026-03-18 02:30:50.978865 | orchestrator | 2026-03-18 02:30:50.978869 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:30:50.978885 | orchestrator | Wednesday 18 March 2026 02:30:39 +0000 (0:00:00.473) 0:00:30.824 ******* 2026-03-18 02:30:50.978889 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc) 2026-03-18 02:30:50.978893 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc)
2026-03-18 02:30:50.978897 | orchestrator |
2026-03-18 02:30:50.978900 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:50.978904 | orchestrator | Wednesday 18 March 2026 02:30:40 +0000 (0:00:00.478) 0:00:31.303 *******
2026-03-18 02:30:50.978908 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a)
2026-03-18 02:30:50.978912 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a)
2026-03-18 02:30:50.978916 | orchestrator |
2026-03-18 02:30:50.978919 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:50.978923 | orchestrator | Wednesday 18 March 2026 02:30:41 +0000 (0:00:00.768) 0:00:32.071 *******
2026-03-18 02:30:50.978927 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a)
2026-03-18 02:30:50.978931 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a)
2026-03-18 02:30:50.978935 | orchestrator |
2026-03-18 02:30:50.978938 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-18 02:30:50.978942 | orchestrator | Wednesday 18 March 2026 02:30:42 +0000 (0:00:01.045) 0:00:33.117 *******
2026-03-18 02:30:50.978946 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-18 02:30:50.978950 | orchestrator |
2026-03-18 02:30:50.978954 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:50.978958 | orchestrator | Wednesday 18 March 2026 02:30:42 +0000 (0:00:00.362) 0:00:33.479 *******
2026-03-18 02:30:50.978961 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-18 02:30:50.978966 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-18 02:30:50.978970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-18 02:30:50.978973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-18 02:30:50.978992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-18 02:30:50.978996 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-18 02:30:50.979000 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-18 02:30:50.979003 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-18 02:30:50.979007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-18 02:30:50.979011 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-18 02:30:50.979015 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-18 02:30:50.979018 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-18 02:30:50.979022 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-18 02:30:50.979026 | orchestrator |
2026-03-18 02:30:50.979030 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:50.979034 | orchestrator | Wednesday 18 March 2026 02:30:43 +0000 (0:00:00.498) 0:00:33.978 *******
2026-03-18 02:30:50.979037 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:50.979041 | orchestrator |
2026-03-18 02:30:50.979045 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:50.979049 | orchestrator | Wednesday 18 March 2026 02:30:43 +0000 (0:00:00.224) 0:00:34.203 *******
2026-03-18 02:30:50.979052 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:50.979056 | orchestrator |
2026-03-18 02:30:50.979060 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:50.979064 | orchestrator | Wednesday 18 March 2026 02:30:43 +0000 (0:00:00.229) 0:00:34.433 *******
2026-03-18 02:30:50.979067 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:50.979071 | orchestrator |
2026-03-18 02:30:50.979086 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:50.979090 | orchestrator | Wednesday 18 March 2026 02:30:43 +0000 (0:00:00.235) 0:00:34.669 *******
2026-03-18 02:30:50.979094 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:50.979098 | orchestrator |
2026-03-18 02:30:50.979101 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:50.979105 | orchestrator | Wednesday 18 March 2026 02:30:44 +0000 (0:00:00.228) 0:00:34.897 *******
2026-03-18 02:30:50.979109 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:50.979112 | orchestrator |
2026-03-18 02:30:50.979116 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:50.979120 | orchestrator | Wednesday 18 March 2026 02:30:44 +0000 (0:00:00.218) 0:00:35.116 *******
2026-03-18 02:30:50.979124 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:50.979128 | orchestrator |
2026-03-18 02:30:50.979132 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:50.979136 | orchestrator | Wednesday 18 March 2026 02:30:44 +0000 (0:00:00.249) 0:00:35.365 *******
2026-03-18 02:30:50.979140 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:50.979143 | orchestrator |
2026-03-18 02:30:50.979150 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:50.979154 | orchestrator | Wednesday 18 March 2026 02:30:44 +0000 (0:00:00.217) 0:00:35.583 *******
2026-03-18 02:30:50.979158 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:50.979161 | orchestrator |
2026-03-18 02:30:50.979165 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:50.979169 | orchestrator | Wednesday 18 March 2026 02:30:45 +0000 (0:00:00.712) 0:00:36.295 *******
2026-03-18 02:30:50.979173 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-18 02:30:50.979177 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-18 02:30:50.979181 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-18 02:30:50.979189 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-18 02:30:50.979193 | orchestrator |
2026-03-18 02:30:50.979197 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:50.979201 | orchestrator | Wednesday 18 March 2026 02:30:46 +0000 (0:00:00.702) 0:00:36.998 *******
2026-03-18 02:30:50.979205 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:50.979208 | orchestrator |
2026-03-18 02:30:50.979212 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:50.979216 | orchestrator | Wednesday 18 March 2026 02:30:46 +0000 (0:00:00.236) 0:00:37.234 *******
2026-03-18 02:30:50.979220 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:50.979224 | orchestrator |
2026-03-18 02:30:50.979227 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:50.979231 | orchestrator | Wednesday 18 March 2026 02:30:46 +0000 (0:00:00.229) 0:00:37.464 *******
2026-03-18 02:30:50.979235 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:50.979239 | orchestrator |
2026-03-18 02:30:50.979243 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-18 02:30:50.979246 | orchestrator | Wednesday 18 March 2026 02:30:46 +0000 (0:00:00.240) 0:00:37.704 *******
2026-03-18 02:30:50.979250 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:50.979254 | orchestrator |
2026-03-18 02:30:50.979258 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-18 02:30:50.979261 | orchestrator | Wednesday 18 March 2026 02:30:47 +0000 (0:00:00.253) 0:00:37.957 *******
2026-03-18 02:30:50.979265 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:50.979269 | orchestrator |
2026-03-18 02:30:50.979273 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-18 02:30:50.979276 | orchestrator | Wednesday 18 March 2026 02:30:47 +0000 (0:00:00.154) 0:00:38.112 *******
2026-03-18 02:30:50.979280 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd0e002fd-9a73-564c-a03c-ee3a79d477af'}})
2026-03-18 02:30:50.979285 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ab16e1e8-130f-595d-96ba-aeefaeb1133d'}})
2026-03-18 02:30:50.979288 | orchestrator |
2026-03-18 02:30:50.979292 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-18 02:30:50.979296 | orchestrator | Wednesday 18 March 2026 02:30:47 +0000 (0:00:00.215) 0:00:38.327 *******
2026-03-18 02:30:50.979301 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:30:50.979306 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:30:50.979311 | orchestrator |
2026-03-18 02:30:50.979315 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-18 02:30:50.979319 | orchestrator | Wednesday 18 March 2026 02:30:49 +0000 (0:00:01.916) 0:00:40.244 *******
2026-03-18 02:30:50.979324 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:30:50.979329 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:30:50.979333 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:50.979338 | orchestrator |
2026-03-18 02:30:50.979342 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-18 02:30:50.979346 | orchestrator | Wednesday 18 March 2026 02:30:49 +0000 (0:00:00.160) 0:00:40.405 *******
2026-03-18 02:30:50.979372 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:30:50.979380 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:30:57.433017 | orchestrator |
2026-03-18 02:30:57.433145 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-18 02:30:57.433171 | orchestrator | Wednesday 18 March 2026 02:30:50 +0000 (0:00:01.431) 0:00:41.836 *******
2026-03-18 02:30:57.433191 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:30:57.433210 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:30:57.433229 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.433249 | orchestrator |
2026-03-18 02:30:57.433267 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-18 02:30:57.433285 | orchestrator | Wednesday 18 March 2026 02:30:51 +0000 (0:00:00.146) 0:00:42.256 *******
2026-03-18 02:30:57.433323 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.433342 | orchestrator |
2026-03-18 02:30:57.433423 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-18 02:30:57.433443 | orchestrator | Wednesday 18 March 2026 02:30:51 +0000 (0:00:00.146) 0:00:42.403 *******
2026-03-18 02:30:57.433462 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:30:57.433479 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:30:57.433496 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.433514 | orchestrator |
2026-03-18 02:30:57.433533 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-18 02:30:57.433553 | orchestrator | Wednesday 18 March 2026 02:30:51 +0000 (0:00:00.174) 0:00:42.577 *******
2026-03-18 02:30:57.433572 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.433592 | orchestrator |
2026-03-18 02:30:57.433609 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-18 02:30:57.433628 | orchestrator | Wednesday 18 March 2026 02:30:51 +0000 (0:00:00.155) 0:00:42.733 *******
2026-03-18 02:30:57.433647 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:30:57.433665 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:30:57.433685 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.433708 | orchestrator |
2026-03-18 02:30:57.433729 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-18 02:30:57.433749 | orchestrator | Wednesday 18 March 2026 02:30:52 +0000 (0:00:00.166) 0:00:42.899 *******
2026-03-18 02:30:57.433771 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.433789 | orchestrator |
2026-03-18 02:30:57.433806 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-18 02:30:57.433822 | orchestrator | Wednesday 18 March 2026 02:30:52 +0000 (0:00:00.158) 0:00:43.058 *******
2026-03-18 02:30:57.433839 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:30:57.433855 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:30:57.433870 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.433886 | orchestrator |
2026-03-18 02:30:57.433902 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-18 02:30:57.433918 | orchestrator | Wednesday 18 March 2026 02:30:52 +0000 (0:00:00.171) 0:00:43.230 *******
2026-03-18 02:30:57.433968 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:30:57.433987 | orchestrator |
2026-03-18 02:30:57.434003 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-18 02:30:57.434107 | orchestrator | Wednesday 18 March 2026 02:30:52 +0000 (0:00:00.147) 0:00:43.378 *******
2026-03-18 02:30:57.434131 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:30:57.434149 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:30:57.434166 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.434182 | orchestrator |
2026-03-18 02:30:57.434200 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-18 02:30:57.434215 | orchestrator | Wednesday 18 March 2026 02:30:52 +0000 (0:00:00.247) 0:00:43.625 *******
2026-03-18 02:30:57.434230 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:30:57.434247 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:30:57.434264 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.434280 | orchestrator |
2026-03-18 02:30:57.434295 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-18 02:30:57.434338 | orchestrator | Wednesday 18 March 2026 02:30:52 +0000 (0:00:00.161) 0:00:43.787 *******
2026-03-18 02:30:57.434402 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:30:57.434418 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:30:57.434434 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.434450 | orchestrator |
2026-03-18 02:30:57.434466 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-18 02:30:57.434483 | orchestrator | Wednesday 18 March 2026 02:30:53 +0000 (0:00:00.167) 0:00:43.954 *******
2026-03-18 02:30:57.434499 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.434515 | orchestrator |
2026-03-18 02:30:57.434530 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-18 02:30:57.434556 | orchestrator | Wednesday 18 March 2026 02:30:53 +0000 (0:00:00.414) 0:00:44.369 *******
2026-03-18 02:30:57.434572 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.434589 | orchestrator |
2026-03-18 02:30:57.434605 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-18 02:30:57.434622 | orchestrator | Wednesday 18 March 2026 02:30:53 +0000 (0:00:00.154) 0:00:44.523 *******
2026-03-18 02:30:57.434638 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.434653 | orchestrator |
2026-03-18 02:30:57.434670 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-18 02:30:57.434686 | orchestrator | Wednesday 18 March 2026 02:30:53 +0000 (0:00:00.167) 0:00:44.691 *******
2026-03-18 02:30:57.434702 | orchestrator | ok: [testbed-node-4] => {
2026-03-18 02:30:57.434718 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-18 02:30:57.434735 | orchestrator | }
2026-03-18 02:30:57.434753 | orchestrator |
2026-03-18 02:30:57.434770 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-18 02:30:57.434787 | orchestrator | Wednesday 18 March 2026 02:30:53 +0000 (0:00:00.167) 0:00:44.858 *******
2026-03-18 02:30:57.434804 | orchestrator | ok: [testbed-node-4] => {
2026-03-18 02:30:57.434821 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-18 02:30:57.434838 | orchestrator | }
2026-03-18 02:30:57.434855 | orchestrator |
2026-03-18 02:30:57.434871 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-18 02:30:57.434902 | orchestrator | Wednesday 18 March 2026 02:30:54 +0000 (0:00:00.164) 0:00:45.022 *******
2026-03-18 02:30:57.434919 | orchestrator | ok: [testbed-node-4] => {
2026-03-18 02:30:57.434935 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-18 02:30:57.434951 | orchestrator | }
2026-03-18 02:30:57.434968 | orchestrator |
2026-03-18 02:30:57.434985 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-18 02:30:57.435001 | orchestrator | Wednesday 18 March 2026 02:30:54 +0000 (0:00:00.157) 0:00:45.180 *******
2026-03-18 02:30:57.435016 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:30:57.435033 | orchestrator |
2026-03-18 02:30:57.435049 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-18 02:30:57.435065 | orchestrator | Wednesday 18 March 2026 02:30:54 +0000 (0:00:00.546) 0:00:45.726 *******
2026-03-18 02:30:57.435082 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:30:57.435099 | orchestrator |
2026-03-18 02:30:57.435116 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-18 02:30:57.435132 | orchestrator | Wednesday 18 March 2026 02:30:55 +0000 (0:00:00.531) 0:00:46.258 *******
2026-03-18 02:30:57.435150 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:30:57.435165 | orchestrator |
2026-03-18 02:30:57.435181 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-18 02:30:57.435196 | orchestrator | Wednesday 18 March 2026 02:30:55 +0000 (0:00:00.558) 0:00:46.817 *******
2026-03-18 02:30:57.435213 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:30:57.435230 | orchestrator |
2026-03-18 02:30:57.435247 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-18 02:30:57.435264 | orchestrator | Wednesday 18 March 2026 02:30:56 +0000 (0:00:00.167) 0:00:46.985 *******
2026-03-18 02:30:57.435281 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.435295 | orchestrator |
2026-03-18 02:30:57.435311 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-18 02:30:57.435327 | orchestrator | Wednesday 18 March 2026 02:30:56 +0000 (0:00:00.146) 0:00:47.131 *******
2026-03-18 02:30:57.435343 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.435425 | orchestrator |
2026-03-18 02:30:57.435444 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-18 02:30:57.435461 | orchestrator | Wednesday 18 March 2026 02:30:56 +0000 (0:00:00.364) 0:00:47.495 *******
2026-03-18 02:30:57.435478 | orchestrator | ok: [testbed-node-4] => {
2026-03-18 02:30:57.435495 | orchestrator |  "vgs_report": {
2026-03-18 02:30:57.435513 | orchestrator |  "vg": []
2026-03-18 02:30:57.435532 | orchestrator |  }
2026-03-18 02:30:57.435549 | orchestrator | }
2026-03-18 02:30:57.435565 | orchestrator |
2026-03-18 02:30:57.435582 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-18 02:30:57.435600 | orchestrator | Wednesday 18 March 2026 02:30:56 +0000 (0:00:00.151) 0:00:47.647 *******
2026-03-18 02:30:57.435618 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.435636 | orchestrator |
2026-03-18 02:30:57.435684 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-18 02:30:57.435703 | orchestrator | Wednesday 18 March 2026 02:30:56 +0000 (0:00:00.154) 0:00:47.802 *******
2026-03-18 02:30:57.435720 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.435738 | orchestrator |
2026-03-18 02:30:57.435754 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-18 02:30:57.435771 | orchestrator | Wednesday 18 March 2026 02:30:57 +0000 (0:00:00.174) 0:00:47.976 *******
2026-03-18 02:30:57.435787 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.435802 | orchestrator |
2026-03-18 02:30:57.435817 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-18 02:30:57.435833 | orchestrator | Wednesday 18 March 2026 02:30:57 +0000 (0:00:00.168) 0:00:48.145 *******
2026-03-18 02:30:57.435850 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:30:57.435868 | orchestrator |
2026-03-18 02:30:57.435924 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-18 02:31:02.706474 | orchestrator | Wednesday 18 March 2026 02:30:57 +0000 (0:00:00.146) 0:00:48.291 *******
2026-03-18 02:31:02.706573 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.706588 | orchestrator |
2026-03-18 02:31:02.706601 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-18 02:31:02.706611 | orchestrator | Wednesday 18 March 2026 02:30:57 +0000 (0:00:00.145) 0:00:48.437 *******
2026-03-18 02:31:02.706622 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.706632 | orchestrator |
2026-03-18 02:31:02.706642 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-18 02:31:02.706652 | orchestrator | Wednesday 18 March 2026 02:30:57 +0000 (0:00:00.158) 0:00:48.595 *******
2026-03-18 02:31:02.706662 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.706672 | orchestrator |
2026-03-18 02:31:02.706683 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-18 02:31:02.706709 | orchestrator | Wednesday 18 March 2026 02:30:57 +0000 (0:00:00.158) 0:00:48.754 *******
2026-03-18 02:31:02.706719 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.706729 | orchestrator |
2026-03-18 02:31:02.706739 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-18 02:31:02.706749 | orchestrator | Wednesday 18 March 2026 02:30:58 +0000 (0:00:00.151) 0:00:48.906 *******
2026-03-18 02:31:02.706759 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.706769 | orchestrator |
2026-03-18 02:31:02.706778 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-18 02:31:02.706787 | orchestrator | Wednesday 18 March 2026 02:30:58 +0000 (0:00:00.151) 0:00:49.058 *******
2026-03-18 02:31:02.706796 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.706804 | orchestrator |
2026-03-18 02:31:02.706813 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-18 02:31:02.706821 | orchestrator | Wednesday 18 March 2026 02:30:58 +0000 (0:00:00.396) 0:00:49.454 *******
2026-03-18 02:31:02.706830 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.706840 | orchestrator |
2026-03-18 02:31:02.706848 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-18 02:31:02.706857 | orchestrator | Wednesday 18 March 2026 02:30:58 +0000 (0:00:00.150) 0:00:49.605 *******
2026-03-18 02:31:02.706865 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.706873 | orchestrator |
2026-03-18 02:31:02.706882 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-18 02:31:02.706891 | orchestrator | Wednesday 18 March 2026 02:30:58 +0000 (0:00:00.149) 0:00:49.754 *******
2026-03-18 02:31:02.706899 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.706909 | orchestrator |
2026-03-18 02:31:02.706917 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-18 02:31:02.706926 | orchestrator | Wednesday 18 March 2026 02:30:59 +0000 (0:00:00.159) 0:00:49.913 *******
2026-03-18 02:31:02.706934 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.706943 | orchestrator |
2026-03-18 02:31:02.706952 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-18 02:31:02.706961 | orchestrator | Wednesday 18 March 2026 02:30:59 +0000 (0:00:00.158) 0:00:50.072 *******
2026-03-18 02:31:02.706973 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:31:02.706980 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:31:02.706986 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.706991 | orchestrator |
2026-03-18 02:31:02.706997 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-18 02:31:02.707003 | orchestrator | Wednesday 18 March 2026 02:30:59 +0000 (0:00:00.180) 0:00:50.252 *******
2026-03-18 02:31:02.707028 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:31:02.707035 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:31:02.707041 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.707048 | orchestrator |
2026-03-18 02:31:02.707054 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-18 02:31:02.707060 | orchestrator | Wednesday 18 March 2026 02:30:59 +0000 (0:00:00.173) 0:00:50.426 *******
2026-03-18 02:31:02.707066 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:31:02.707072 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:31:02.707079 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.707085 | orchestrator |
2026-03-18 02:31:02.707092 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-18 02:31:02.707098 | orchestrator | Wednesday 18 March 2026 02:30:59 +0000 (0:00:00.172) 0:00:50.599 *******
2026-03-18 02:31:02.707105 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:31:02.707111 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:31:02.707118 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.707123 | orchestrator |
2026-03-18 02:31:02.707143 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-18 02:31:02.707149 | orchestrator | Wednesday 18 March 2026 02:30:59 +0000 (0:00:00.165) 0:00:50.764 *******
2026-03-18 02:31:02.707155 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:31:02.707160 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:31:02.707166 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.707171 | orchestrator |
2026-03-18 02:31:02.707177 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-18 02:31:02.707182 | orchestrator | Wednesday 18 March 2026 02:31:00 +0000 (0:00:00.177) 0:00:50.941 *******
2026-03-18 02:31:02.707193 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:31:02.707198 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:31:02.707204 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.707209 | orchestrator |
2026-03-18 02:31:02.707215 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-18 02:31:02.707220 | orchestrator | Wednesday 18 March 2026 02:31:00 +0000 (0:00:00.189) 0:00:51.130 *******
2026-03-18 02:31:02.707225 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:31:02.707231 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:31:02.707236 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.707242 | orchestrator |
2026-03-18 02:31:02.707247 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-18 02:31:02.707257 | orchestrator | Wednesday 18 March 2026 02:31:00 +0000 (0:00:00.400) 0:00:51.531 *******
2026-03-18 02:31:02.707262 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:31:02.707268 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:31:02.707277 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.707286 | orchestrator |
2026-03-18 02:31:02.707298 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-18 02:31:02.707308 | orchestrator | Wednesday 18 March 2026 02:31:00 +0000 (0:00:00.177) 0:00:51.709 *******
2026-03-18 02:31:02.707319 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:31:02.707328 | orchestrator |
2026-03-18 02:31:02.707335 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-18 02:31:02.707401 | orchestrator | Wednesday 18 March 2026 02:31:01 +0000 (0:00:00.566) 0:00:52.275 *******
2026-03-18 02:31:02.707413 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:31:02.707422 | orchestrator |
2026-03-18 02:31:02.707431 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-18 02:31:02.707439 | orchestrator | Wednesday 18 March 2026 02:31:01 +0000 (0:00:00.531) 0:00:52.806 *******
2026-03-18 02:31:02.707448 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:31:02.707457 | orchestrator |
2026-03-18 02:31:02.707465 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-18 02:31:02.707473 | orchestrator | Wednesday 18 March 2026 02:31:02 +0000 (0:00:00.175) 0:00:52.982 *******
2026-03-18 02:31:02.707482 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'vg_name': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:31:02.707491 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'vg_name': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:31:02.707499 | orchestrator |
2026-03-18 02:31:02.707507 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-18 02:31:02.707514 | orchestrator | Wednesday 18 March 2026 02:31:02 +0000 (0:00:00.200) 0:00:53.182 *******
2026-03-18 02:31:02.707522 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:31:02.707530 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:31:02.707539 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:02.707547 | orchestrator |
2026-03-18 02:31:02.707555 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-18 02:31:02.707563 | orchestrator | Wednesday 18 March 2026 02:31:02 +0000 (0:00:00.197) 0:00:53.380 *******
2026-03-18 02:31:02.707571 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:31:02.707586 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:31:09.921632 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:31:09.921727 | orchestrator |
2026-03-18 02:31:09.921740 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-18 02:31:09.921751 |
orchestrator | Wednesday 18 March 2026 02:31:02 +0000 (0:00:00.185) 0:00:53.565 ******* 2026-03-18 02:31:09.921761 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})  2026-03-18 02:31:09.921771 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})  2026-03-18 02:31:09.921801 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:31:09.921811 | orchestrator | 2026-03-18 02:31:09.921833 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-18 02:31:09.921842 | orchestrator | Wednesday 18 March 2026 02:31:02 +0000 (0:00:00.176) 0:00:53.742 ******* 2026-03-18 02:31:09.921851 | orchestrator | ok: [testbed-node-4] => { 2026-03-18 02:31:09.921860 | orchestrator |  "lvm_report": { 2026-03-18 02:31:09.921870 | orchestrator |  "lv": [ 2026-03-18 02:31:09.921879 | orchestrator |  { 2026-03-18 02:31:09.921888 | orchestrator |  "lv_name": "osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d", 2026-03-18 02:31:09.921897 | orchestrator |  "vg_name": "ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d" 2026-03-18 02:31:09.921905 | orchestrator |  }, 2026-03-18 02:31:09.921914 | orchestrator |  { 2026-03-18 02:31:09.921923 | orchestrator |  "lv_name": "osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af", 2026-03-18 02:31:09.921931 | orchestrator |  "vg_name": "ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af" 2026-03-18 02:31:09.921940 | orchestrator |  } 2026-03-18 02:31:09.921949 | orchestrator |  ], 2026-03-18 02:31:09.921957 | orchestrator |  "pv": [ 2026-03-18 02:31:09.921966 | orchestrator |  { 2026-03-18 02:31:09.921974 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-18 02:31:09.921983 | orchestrator |  "vg_name": "ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af" 2026-03-18 02:31:09.921992 | orchestrator |  }, 2026-03-18 
02:31:09.922001 | orchestrator |  { 2026-03-18 02:31:09.922010 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-18 02:31:09.922088 | orchestrator |  "vg_name": "ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d" 2026-03-18 02:31:09.922097 | orchestrator |  } 2026-03-18 02:31:09.922106 | orchestrator |  ] 2026-03-18 02:31:09.922115 | orchestrator |  } 2026-03-18 02:31:09.922123 | orchestrator | } 2026-03-18 02:31:09.922132 | orchestrator | 2026-03-18 02:31:09.922141 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-18 02:31:09.922150 | orchestrator | 2026-03-18 02:31:09.922159 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-18 02:31:09.922168 | orchestrator | Wednesday 18 March 2026 02:31:03 +0000 (0:00:00.337) 0:00:54.079 ******* 2026-03-18 02:31:09.922183 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-18 02:31:09.922199 | orchestrator | 2026-03-18 02:31:09.922213 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-18 02:31:09.922227 | orchestrator | Wednesday 18 March 2026 02:31:03 +0000 (0:00:00.748) 0:00:54.827 ******* 2026-03-18 02:31:09.922244 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:31:09.922260 | orchestrator | 2026-03-18 02:31:09.922274 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:31:09.922283 | orchestrator | Wednesday 18 March 2026 02:31:04 +0000 (0:00:00.331) 0:00:55.158 ******* 2026-03-18 02:31:09.922291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-18 02:31:09.922300 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-18 02:31:09.922309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-18 02:31:09.922317 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-18 02:31:09.922325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-18 02:31:09.922334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-18 02:31:09.922367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-18 02:31:09.922376 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-18 02:31:09.922394 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-18 02:31:09.922403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-18 02:31:09.922411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-18 02:31:09.922420 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-18 02:31:09.922428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-18 02:31:09.922437 | orchestrator | 2026-03-18 02:31:09.922445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:31:09.922454 | orchestrator | Wednesday 18 March 2026 02:31:04 +0000 (0:00:00.477) 0:00:55.636 ******* 2026-03-18 02:31:09.922462 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:09.922471 | orchestrator | 2026-03-18 02:31:09.922479 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:31:09.922488 | orchestrator | Wednesday 18 March 2026 02:31:05 +0000 (0:00:00.237) 0:00:55.874 ******* 2026-03-18 02:31:09.922496 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:09.922505 | orchestrator | 2026-03-18 
02:31:09.922513 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:31:09.922538 | orchestrator | Wednesday 18 March 2026 02:31:05 +0000 (0:00:00.232) 0:00:56.107 ******* 2026-03-18 02:31:09.922547 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:09.922556 | orchestrator | 2026-03-18 02:31:09.922565 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:31:09.922573 | orchestrator | Wednesday 18 March 2026 02:31:05 +0000 (0:00:00.227) 0:00:56.334 ******* 2026-03-18 02:31:09.922582 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:09.922591 | orchestrator | 2026-03-18 02:31:09.922599 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:31:09.922608 | orchestrator | Wednesday 18 March 2026 02:31:05 +0000 (0:00:00.231) 0:00:56.565 ******* 2026-03-18 02:31:09.922617 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:09.922625 | orchestrator | 2026-03-18 02:31:09.922634 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:31:09.922643 | orchestrator | Wednesday 18 March 2026 02:31:05 +0000 (0:00:00.207) 0:00:56.773 ******* 2026-03-18 02:31:09.922652 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:09.922660 | orchestrator | 2026-03-18 02:31:09.922669 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:31:09.922678 | orchestrator | Wednesday 18 March 2026 02:31:06 +0000 (0:00:00.245) 0:00:57.019 ******* 2026-03-18 02:31:09.922686 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:09.922695 | orchestrator | 2026-03-18 02:31:09.922704 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:31:09.922713 | orchestrator | Wednesday 18 March 2026 02:31:06 +0000 (0:00:00.213) 
0:00:57.233 ******* 2026-03-18 02:31:09.922721 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:09.922730 | orchestrator | 2026-03-18 02:31:09.922738 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:31:09.922747 | orchestrator | Wednesday 18 March 2026 02:31:07 +0000 (0:00:00.786) 0:00:58.019 ******* 2026-03-18 02:31:09.922756 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403) 2026-03-18 02:31:09.922766 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403) 2026-03-18 02:31:09.922775 | orchestrator | 2026-03-18 02:31:09.922783 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:31:09.922792 | orchestrator | Wednesday 18 March 2026 02:31:07 +0000 (0:00:00.475) 0:00:58.495 ******* 2026-03-18 02:31:09.922878 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00) 2026-03-18 02:31:09.922894 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00) 2026-03-18 02:31:09.922909 | orchestrator | 2026-03-18 02:31:09.922918 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:31:09.922926 | orchestrator | Wednesday 18 March 2026 02:31:08 +0000 (0:00:00.480) 0:00:58.975 ******* 2026-03-18 02:31:09.922935 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568) 2026-03-18 02:31:09.922944 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568) 2026-03-18 02:31:09.922952 | orchestrator | 2026-03-18 02:31:09.922961 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:31:09.922970 | orchestrator | Wednesday 18 
March 2026 02:31:08 +0000 (0:00:00.495) 0:00:59.470 ******* 2026-03-18 02:31:09.922978 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216) 2026-03-18 02:31:09.922987 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216) 2026-03-18 02:31:09.922996 | orchestrator | 2026-03-18 02:31:09.923005 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-18 02:31:09.923013 | orchestrator | Wednesday 18 March 2026 02:31:09 +0000 (0:00:00.487) 0:00:59.958 ******* 2026-03-18 02:31:09.923022 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-18 02:31:09.923031 | orchestrator | 2026-03-18 02:31:09.923039 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:31:09.923048 | orchestrator | Wednesday 18 March 2026 02:31:09 +0000 (0:00:00.369) 0:01:00.328 ******* 2026-03-18 02:31:09.923056 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-18 02:31:09.923065 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-18 02:31:09.923074 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-18 02:31:09.923082 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-18 02:31:09.923091 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-18 02:31:09.923099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-18 02:31:09.923108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-18 02:31:09.923116 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-18 02:31:09.923125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-18 02:31:09.923133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-18 02:31:09.923142 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-18 02:31:09.923178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-18 02:31:19.296671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-18 02:31:19.296761 | orchestrator | 2026-03-18 02:31:19.296772 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:31:19.296781 | orchestrator | Wednesday 18 March 2026 02:31:09 +0000 (0:00:00.444) 0:01:00.772 ******* 2026-03-18 02:31:19.296788 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.296797 | orchestrator | 2026-03-18 02:31:19.296804 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:31:19.296812 | orchestrator | Wednesday 18 March 2026 02:31:10 +0000 (0:00:00.227) 0:01:01.000 ******* 2026-03-18 02:31:19.296819 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.296826 | orchestrator | 2026-03-18 02:31:19.296847 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:31:19.296879 | orchestrator | Wednesday 18 March 2026 02:31:10 +0000 (0:00:00.230) 0:01:01.230 ******* 2026-03-18 02:31:19.296890 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.296903 | orchestrator | 2026-03-18 02:31:19.296917 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:31:19.296930 | 
orchestrator | Wednesday 18 March 2026 02:31:10 +0000 (0:00:00.242) 0:01:01.472 ******* 2026-03-18 02:31:19.296944 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.296957 | orchestrator | 2026-03-18 02:31:19.296966 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:31:19.296973 | orchestrator | Wednesday 18 March 2026 02:31:10 +0000 (0:00:00.227) 0:01:01.700 ******* 2026-03-18 02:31:19.296980 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.296987 | orchestrator | 2026-03-18 02:31:19.296994 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:31:19.297002 | orchestrator | Wednesday 18 March 2026 02:31:11 +0000 (0:00:00.743) 0:01:02.443 ******* 2026-03-18 02:31:19.297009 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.297016 | orchestrator | 2026-03-18 02:31:19.297023 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:31:19.297030 | orchestrator | Wednesday 18 March 2026 02:31:11 +0000 (0:00:00.220) 0:01:02.664 ******* 2026-03-18 02:31:19.297037 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.297044 | orchestrator | 2026-03-18 02:31:19.297051 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:31:19.297058 | orchestrator | Wednesday 18 March 2026 02:31:12 +0000 (0:00:00.211) 0:01:02.876 ******* 2026-03-18 02:31:19.297066 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.297073 | orchestrator | 2026-03-18 02:31:19.297080 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:31:19.297087 | orchestrator | Wednesday 18 March 2026 02:31:12 +0000 (0:00:00.241) 0:01:03.117 ******* 2026-03-18 02:31:19.297095 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-18 02:31:19.297103 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-18 02:31:19.297110 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-18 02:31:19.297118 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-18 02:31:19.297125 | orchestrator | 2026-03-18 02:31:19.297132 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:31:19.297139 | orchestrator | Wednesday 18 March 2026 02:31:12 +0000 (0:00:00.719) 0:01:03.836 ******* 2026-03-18 02:31:19.297146 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.297153 | orchestrator | 2026-03-18 02:31:19.297161 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:31:19.297168 | orchestrator | Wednesday 18 March 2026 02:31:13 +0000 (0:00:00.266) 0:01:04.102 ******* 2026-03-18 02:31:19.297175 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.297182 | orchestrator | 2026-03-18 02:31:19.297189 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:31:19.297196 | orchestrator | Wednesday 18 March 2026 02:31:13 +0000 (0:00:00.221) 0:01:04.324 ******* 2026-03-18 02:31:19.297203 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.297210 | orchestrator | 2026-03-18 02:31:19.297218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-18 02:31:19.297225 | orchestrator | Wednesday 18 March 2026 02:31:13 +0000 (0:00:00.229) 0:01:04.553 ******* 2026-03-18 02:31:19.297234 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.297242 | orchestrator | 2026-03-18 02:31:19.297250 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-18 02:31:19.297258 | orchestrator | Wednesday 18 March 2026 02:31:13 +0000 (0:00:00.218) 0:01:04.772 ******* 2026-03-18 02:31:19.297267 | orchestrator | skipping: [testbed-node-5] 2026-03-18 
02:31:19.297275 | orchestrator | 2026-03-18 02:31:19.297283 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-18 02:31:19.297291 | orchestrator | Wednesday 18 March 2026 02:31:14 +0000 (0:00:00.153) 0:01:04.926 ******* 2026-03-18 02:31:19.297305 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'def37aef-ab10-5729-81f7-b9371c5efcea'}}) 2026-03-18 02:31:19.297315 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f498c8c9-64fb-5c46-ab13-dfed2090c41f'}}) 2026-03-18 02:31:19.297324 | orchestrator | 2026-03-18 02:31:19.297370 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-18 02:31:19.297380 | orchestrator | Wednesday 18 March 2026 02:31:14 +0000 (0:00:00.202) 0:01:05.129 ******* 2026-03-18 02:31:19.297388 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'}) 2026-03-18 02:31:19.297397 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'}) 2026-03-18 02:31:19.297404 | orchestrator | 2026-03-18 02:31:19.297411 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-18 02:31:19.297433 | orchestrator | Wednesday 18 March 2026 02:31:16 +0000 (0:00:01.812) 0:01:06.942 ******* 2026-03-18 02:31:19.297441 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:19.297449 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:19.297457 | orchestrator | skipping: 
[testbed-node-5] 2026-03-18 02:31:19.297464 | orchestrator | 2026-03-18 02:31:19.297471 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-18 02:31:19.297483 | orchestrator | Wednesday 18 March 2026 02:31:16 +0000 (0:00:00.411) 0:01:07.353 ******* 2026-03-18 02:31:19.297490 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'}) 2026-03-18 02:31:19.297498 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'}) 2026-03-18 02:31:19.297505 | orchestrator | 2026-03-18 02:31:19.297512 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-18 02:31:19.297520 | orchestrator | Wednesday 18 March 2026 02:31:17 +0000 (0:00:01.369) 0:01:08.723 ******* 2026-03-18 02:31:19.297527 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:19.297534 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:19.297541 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.297548 | orchestrator | 2026-03-18 02:31:19.297556 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-18 02:31:19.297563 | orchestrator | Wednesday 18 March 2026 02:31:18 +0000 (0:00:00.182) 0:01:08.905 ******* 2026-03-18 02:31:19.297570 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.297577 | orchestrator | 2026-03-18 02:31:19.297584 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-18 02:31:19.297591 | 
orchestrator | Wednesday 18 March 2026 02:31:18 +0000 (0:00:00.160) 0:01:09.066 ******* 2026-03-18 02:31:19.297598 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:19.297606 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:19.297613 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.297631 | orchestrator | 2026-03-18 02:31:19.297648 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-18 02:31:19.297662 | orchestrator | Wednesday 18 March 2026 02:31:18 +0000 (0:00:00.167) 0:01:09.234 ******* 2026-03-18 02:31:19.297675 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.297688 | orchestrator | 2026-03-18 02:31:19.297702 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-18 02:31:19.297715 | orchestrator | Wednesday 18 March 2026 02:31:18 +0000 (0:00:00.159) 0:01:09.393 ******* 2026-03-18 02:31:19.297730 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:19.297739 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:19.297746 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.297753 | orchestrator | 2026-03-18 02:31:19.297760 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-18 02:31:19.297767 | orchestrator | Wednesday 18 March 2026 02:31:18 +0000 (0:00:00.168) 0:01:09.562 ******* 2026-03-18 02:31:19.297774 | orchestrator | 
skipping: [testbed-node-5] 2026-03-18 02:31:19.297781 | orchestrator | 2026-03-18 02:31:19.297789 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-18 02:31:19.297796 | orchestrator | Wednesday 18 March 2026 02:31:18 +0000 (0:00:00.145) 0:01:09.708 ******* 2026-03-18 02:31:19.297803 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:19.297810 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:19.297818 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:19.297825 | orchestrator | 2026-03-18 02:31:19.297832 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-18 02:31:19.297839 | orchestrator | Wednesday 18 March 2026 02:31:18 +0000 (0:00:00.148) 0:01:09.857 ******* 2026-03-18 02:31:19.297847 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:31:19.297854 | orchestrator | 2026-03-18 02:31:19.297861 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-18 02:31:19.297868 | orchestrator | Wednesday 18 March 2026 02:31:19 +0000 (0:00:00.137) 0:01:09.994 ******* 2026-03-18 02:31:19.297882 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:26.265470 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:26.265620 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.265639 | orchestrator | 2026-03-18 02:31:26.265653 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-03-18 02:31:26.265665 | orchestrator | Wednesday 18 March 2026 02:31:19 +0000 (0:00:00.160) 0:01:10.155 ******* 2026-03-18 02:31:26.265694 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:26.265706 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:26.265716 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.265727 | orchestrator | 2026-03-18 02:31:26.265738 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-18 02:31:26.265749 | orchestrator | Wednesday 18 March 2026 02:31:19 +0000 (0:00:00.165) 0:01:10.321 ******* 2026-03-18 02:31:26.265760 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:26.265803 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:26.265823 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.265842 | orchestrator | 2026-03-18 02:31:26.265861 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-18 02:31:26.265881 | orchestrator | Wednesday 18 March 2026 02:31:19 +0000 (0:00:00.413) 0:01:10.735 ******* 2026-03-18 02:31:26.265902 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.265922 | orchestrator | 2026-03-18 02:31:26.265941 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-18 02:31:26.265955 | orchestrator | Wednesday 18 March 2026 02:31:20 +0000 
(0:00:00.162) 0:01:10.898 ******* 2026-03-18 02:31:26.265967 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.265980 | orchestrator | 2026-03-18 02:31:26.265993 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-18 02:31:26.266005 | orchestrator | Wednesday 18 March 2026 02:31:20 +0000 (0:00:00.160) 0:01:11.059 ******* 2026-03-18 02:31:26.266076 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.266089 | orchestrator | 2026-03-18 02:31:26.266102 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-18 02:31:26.266114 | orchestrator | Wednesday 18 March 2026 02:31:20 +0000 (0:00:00.152) 0:01:11.212 ******* 2026-03-18 02:31:26.266124 | orchestrator | ok: [testbed-node-5] => { 2026-03-18 02:31:26.266136 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-18 02:31:26.266147 | orchestrator | } 2026-03-18 02:31:26.266158 | orchestrator | 2026-03-18 02:31:26.266207 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-18 02:31:26.266218 | orchestrator | Wednesday 18 March 2026 02:31:20 +0000 (0:00:00.156) 0:01:11.368 ******* 2026-03-18 02:31:26.266229 | orchestrator | ok: [testbed-node-5] => { 2026-03-18 02:31:26.266240 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-18 02:31:26.266250 | orchestrator | } 2026-03-18 02:31:26.266261 | orchestrator | 2026-03-18 02:31:26.266271 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-18 02:31:26.266282 | orchestrator | Wednesday 18 March 2026 02:31:20 +0000 (0:00:00.153) 0:01:11.522 ******* 2026-03-18 02:31:26.266292 | orchestrator | ok: [testbed-node-5] => { 2026-03-18 02:31:26.266303 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-18 02:31:26.266314 | orchestrator | } 2026-03-18 02:31:26.266325 | orchestrator | 2026-03-18 02:31:26.266365 | orchestrator | TASK 
[Gather DB VGs with total and available size in bytes] ******************** 2026-03-18 02:31:26.266376 | orchestrator | Wednesday 18 March 2026 02:31:20 +0000 (0:00:00.159) 0:01:11.682 ******* 2026-03-18 02:31:26.266387 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:31:26.266398 | orchestrator | 2026-03-18 02:31:26.266408 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-18 02:31:26.266419 | orchestrator | Wednesday 18 March 2026 02:31:21 +0000 (0:00:00.550) 0:01:12.232 ******* 2026-03-18 02:31:26.266430 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:31:26.266440 | orchestrator | 2026-03-18 02:31:26.266451 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-18 02:31:26.266462 | orchestrator | Wednesday 18 March 2026 02:31:21 +0000 (0:00:00.549) 0:01:12.782 ******* 2026-03-18 02:31:26.266473 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:31:26.266483 | orchestrator | 2026-03-18 02:31:26.266494 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-18 02:31:26.266505 | orchestrator | Wednesday 18 March 2026 02:31:22 +0000 (0:00:00.539) 0:01:13.321 ******* 2026-03-18 02:31:26.266516 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:31:26.266527 | orchestrator | 2026-03-18 02:31:26.266537 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-18 02:31:26.266548 | orchestrator | Wednesday 18 March 2026 02:31:22 +0000 (0:00:00.157) 0:01:13.479 ******* 2026-03-18 02:31:26.266570 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.266581 | orchestrator | 2026-03-18 02:31:26.266592 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-18 02:31:26.266603 | orchestrator | Wednesday 18 March 2026 02:31:22 +0000 (0:00:00.135) 0:01:13.614 ******* 2026-03-18 02:31:26.266614 | orchestrator | 
skipping: [testbed-node-5] 2026-03-18 02:31:26.266624 | orchestrator | 2026-03-18 02:31:26.266635 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-18 02:31:26.266646 | orchestrator | Wednesday 18 March 2026 02:31:23 +0000 (0:00:00.385) 0:01:13.999 ******* 2026-03-18 02:31:26.266657 | orchestrator | ok: [testbed-node-5] => { 2026-03-18 02:31:26.266668 | orchestrator |  "vgs_report": { 2026-03-18 02:31:26.266679 | orchestrator |  "vg": [] 2026-03-18 02:31:26.266710 | orchestrator |  } 2026-03-18 02:31:26.266723 | orchestrator | } 2026-03-18 02:31:26.266734 | orchestrator | 2026-03-18 02:31:26.266745 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-18 02:31:26.266755 | orchestrator | Wednesday 18 March 2026 02:31:23 +0000 (0:00:00.176) 0:01:14.175 ******* 2026-03-18 02:31:26.266766 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.266777 | orchestrator | 2026-03-18 02:31:26.266787 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-18 02:31:26.266798 | orchestrator | Wednesday 18 March 2026 02:31:23 +0000 (0:00:00.144) 0:01:14.319 ******* 2026-03-18 02:31:26.266809 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.266819 | orchestrator | 2026-03-18 02:31:26.266837 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-18 02:31:26.266848 | orchestrator | Wednesday 18 March 2026 02:31:23 +0000 (0:00:00.143) 0:01:14.463 ******* 2026-03-18 02:31:26.266859 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.266870 | orchestrator | 2026-03-18 02:31:26.266880 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-18 02:31:26.266891 | orchestrator | Wednesday 18 March 2026 02:31:23 +0000 (0:00:00.145) 0:01:14.609 ******* 2026-03-18 02:31:26.266902 | orchestrator | 
skipping: [testbed-node-5] 2026-03-18 02:31:26.266912 | orchestrator | 2026-03-18 02:31:26.266923 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-18 02:31:26.266934 | orchestrator | Wednesday 18 March 2026 02:31:23 +0000 (0:00:00.151) 0:01:14.760 ******* 2026-03-18 02:31:26.266945 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.266955 | orchestrator | 2026-03-18 02:31:26.266966 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-18 02:31:26.266977 | orchestrator | Wednesday 18 March 2026 02:31:24 +0000 (0:00:00.155) 0:01:14.916 ******* 2026-03-18 02:31:26.266987 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.266998 | orchestrator | 2026-03-18 02:31:26.267009 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-18 02:31:26.267020 | orchestrator | Wednesday 18 March 2026 02:31:24 +0000 (0:00:00.137) 0:01:15.054 ******* 2026-03-18 02:31:26.267030 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.267041 | orchestrator | 2026-03-18 02:31:26.267051 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-18 02:31:26.267062 | orchestrator | Wednesday 18 March 2026 02:31:24 +0000 (0:00:00.140) 0:01:15.194 ******* 2026-03-18 02:31:26.267073 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.267083 | orchestrator | 2026-03-18 02:31:26.267094 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-18 02:31:26.267105 | orchestrator | Wednesday 18 March 2026 02:31:24 +0000 (0:00:00.145) 0:01:15.340 ******* 2026-03-18 02:31:26.267116 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.267126 | orchestrator | 2026-03-18 02:31:26.267137 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-18 
02:31:26.267148 | orchestrator | Wednesday 18 March 2026 02:31:24 +0000 (0:00:00.188) 0:01:15.528 ******* 2026-03-18 02:31:26.267159 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.267176 | orchestrator | 2026-03-18 02:31:26.267187 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-18 02:31:26.267198 | orchestrator | Wednesday 18 March 2026 02:31:24 +0000 (0:00:00.143) 0:01:15.672 ******* 2026-03-18 02:31:26.267208 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.267219 | orchestrator | 2026-03-18 02:31:26.267229 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-18 02:31:26.267240 | orchestrator | Wednesday 18 March 2026 02:31:25 +0000 (0:00:00.398) 0:01:16.070 ******* 2026-03-18 02:31:26.267251 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.267261 | orchestrator | 2026-03-18 02:31:26.267272 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-18 02:31:26.267283 | orchestrator | Wednesday 18 March 2026 02:31:25 +0000 (0:00:00.153) 0:01:16.224 ******* 2026-03-18 02:31:26.267293 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.267304 | orchestrator | 2026-03-18 02:31:26.267315 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-18 02:31:26.267325 | orchestrator | Wednesday 18 March 2026 02:31:25 +0000 (0:00:00.158) 0:01:16.382 ******* 2026-03-18 02:31:26.267359 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.267370 | orchestrator | 2026-03-18 02:31:26.267381 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-18 02:31:26.267392 | orchestrator | Wednesday 18 March 2026 02:31:25 +0000 (0:00:00.168) 0:01:16.550 ******* 2026-03-18 02:31:26.267402 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:26.267414 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:26.267424 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.267435 | orchestrator | 2026-03-18 02:31:26.267446 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-18 02:31:26.267457 | orchestrator | Wednesday 18 March 2026 02:31:25 +0000 (0:00:00.203) 0:01:16.754 ******* 2026-03-18 02:31:26.267467 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:26.267478 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:26.267489 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:26.267500 | orchestrator | 2026-03-18 02:31:26.267510 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-18 02:31:26.267521 | orchestrator | Wednesday 18 March 2026 02:31:26 +0000 (0:00:00.177) 0:01:16.931 ******* 2026-03-18 02:31:26.267540 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:29.625088 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:29.625195 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:29.625210 | orchestrator | 2026-03-18 02:31:29.625223 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-03-18 02:31:29.625235 | orchestrator | Wednesday 18 March 2026 02:31:26 +0000 (0:00:00.193) 0:01:17.124 ******* 2026-03-18 02:31:29.625264 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:29.625276 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:29.625287 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:29.625298 | orchestrator | 2026-03-18 02:31:29.625386 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-18 02:31:29.625399 | orchestrator | Wednesday 18 March 2026 02:31:26 +0000 (0:00:00.164) 0:01:17.289 ******* 2026-03-18 02:31:29.625410 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:29.625421 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:29.625432 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:29.625442 | orchestrator | 2026-03-18 02:31:29.625453 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-18 02:31:29.625464 | orchestrator | Wednesday 18 March 2026 02:31:26 +0000 (0:00:00.194) 0:01:17.484 ******* 2026-03-18 02:31:29.625474 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:29.625485 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:29.625495 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:29.625506 | orchestrator | 2026-03-18 02:31:29.625517 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-18 02:31:29.625527 | orchestrator | Wednesday 18 March 2026 02:31:26 +0000 (0:00:00.193) 0:01:17.678 ******* 2026-03-18 02:31:29.625538 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:29.625549 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:29.625559 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:29.625570 | orchestrator | 2026-03-18 02:31:29.625580 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-18 02:31:29.625591 | orchestrator | Wednesday 18 March 2026 02:31:26 +0000 (0:00:00.171) 0:01:17.849 ******* 2026-03-18 02:31:29.625601 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:29.625612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:29.625624 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:29.625635 | orchestrator | 2026-03-18 02:31:29.625648 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-18 02:31:29.625660 | orchestrator | Wednesday 18 March 2026 02:31:27 +0000 (0:00:00.171) 0:01:18.020 ******* 2026-03-18 02:31:29.625672 | 
orchestrator | ok: [testbed-node-5] 2026-03-18 02:31:29.625684 | orchestrator | 2026-03-18 02:31:29.625696 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-18 02:31:29.625708 | orchestrator | Wednesday 18 March 2026 02:31:27 +0000 (0:00:00.823) 0:01:18.844 ******* 2026-03-18 02:31:29.625720 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:31:29.625732 | orchestrator | 2026-03-18 02:31:29.625744 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-18 02:31:29.625756 | orchestrator | Wednesday 18 March 2026 02:31:28 +0000 (0:00:00.540) 0:01:19.384 ******* 2026-03-18 02:31:29.625768 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:31:29.625780 | orchestrator | 2026-03-18 02:31:29.625792 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-18 02:31:29.625803 | orchestrator | Wednesday 18 March 2026 02:31:28 +0000 (0:00:00.164) 0:01:19.549 ******* 2026-03-18 02:31:29.625815 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'vg_name': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'}) 2026-03-18 02:31:29.625838 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'vg_name': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'}) 2026-03-18 02:31:29.625850 | orchestrator | 2026-03-18 02:31:29.625862 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-18 02:31:29.625874 | orchestrator | Wednesday 18 March 2026 02:31:28 +0000 (0:00:00.213) 0:01:19.763 ******* 2026-03-18 02:31:29.625903 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:29.625916 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:29.625928 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:29.625941 | orchestrator | 2026-03-18 02:31:29.625958 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-18 02:31:29.625970 | orchestrator | Wednesday 18 March 2026 02:31:29 +0000 (0:00:00.175) 0:01:19.939 ******* 2026-03-18 02:31:29.625983 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:29.625993 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:29.626004 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:29.626079 | orchestrator | 2026-03-18 02:31:29.626095 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-18 02:31:29.626106 | orchestrator | Wednesday 18 March 2026 02:31:29 +0000 (0:00:00.174) 0:01:20.114 ******* 2026-03-18 02:31:29.626116 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 02:31:29.626127 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 02:31:29.626138 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:29.626149 | orchestrator | 2026-03-18 02:31:29.626160 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-18 02:31:29.626170 | orchestrator | Wednesday 18 March 2026 02:31:29 +0000 (0:00:00.189) 0:01:20.304 ******* 2026-03-18 02:31:29.626181 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-18 02:31:29.626192 | orchestrator |  "lvm_report": { 2026-03-18 02:31:29.626203 | orchestrator |  "lv": [ 2026-03-18 02:31:29.626213 | orchestrator |  { 2026-03-18 02:31:29.626224 | orchestrator |  "lv_name": "osd-block-def37aef-ab10-5729-81f7-b9371c5efcea", 2026-03-18 02:31:29.626236 | orchestrator |  "vg_name": "ceph-def37aef-ab10-5729-81f7-b9371c5efcea" 2026-03-18 02:31:29.626246 | orchestrator |  }, 2026-03-18 02:31:29.626257 | orchestrator |  { 2026-03-18 02:31:29.626268 | orchestrator |  "lv_name": "osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f", 2026-03-18 02:31:29.626279 | orchestrator |  "vg_name": "ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f" 2026-03-18 02:31:29.626289 | orchestrator |  } 2026-03-18 02:31:29.626300 | orchestrator |  ], 2026-03-18 02:31:29.626311 | orchestrator |  "pv": [ 2026-03-18 02:31:29.626321 | orchestrator |  { 2026-03-18 02:31:29.626350 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-18 02:31:29.626362 | orchestrator |  "vg_name": "ceph-def37aef-ab10-5729-81f7-b9371c5efcea" 2026-03-18 02:31:29.626372 | orchestrator |  }, 2026-03-18 02:31:29.626383 | orchestrator |  { 2026-03-18 02:31:29.626393 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-18 02:31:29.626404 | orchestrator |  "vg_name": "ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f" 2026-03-18 02:31:29.626427 | orchestrator |  } 2026-03-18 02:31:29.626438 | orchestrator |  ] 2026-03-18 02:31:29.626449 | orchestrator |  } 2026-03-18 02:31:29.626460 | orchestrator | } 2026-03-18 02:31:29.626470 | orchestrator | 2026-03-18 02:31:29.626481 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 02:31:29.626492 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-18 02:31:29.626503 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-18 02:31:29.626513 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-18 02:31:29.626524 | orchestrator | 2026-03-18 02:31:29.626535 | orchestrator | 2026-03-18 02:31:29.626545 | orchestrator | 2026-03-18 02:31:29.626556 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 02:31:29.626566 | orchestrator | Wednesday 18 March 2026 02:31:29 +0000 (0:00:00.157) 0:01:20.461 ******* 2026-03-18 02:31:29.626577 | orchestrator | =============================================================================== 2026-03-18 02:31:29.626587 | orchestrator | Create block VGs -------------------------------------------------------- 5.81s 2026-03-18 02:31:29.626598 | orchestrator | Create block LVs -------------------------------------------------------- 4.29s 2026-03-18 02:31:29.626608 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.92s 2026-03-18 02:31:29.626619 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.78s 2026-03-18 02:31:29.626629 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.64s 2026-03-18 02:31:29.626640 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.63s 2026-03-18 02:31:29.626650 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.61s 2026-03-18 02:31:29.626661 | orchestrator | Add known links to the list of available block devices ------------------ 1.52s 2026-03-18 02:31:29.626680 | orchestrator | Add known partitions to the list of available block devices ------------- 1.40s 2026-03-18 02:31:30.055047 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.30s 2026-03-18 02:31:30.055186 | orchestrator | Add known links to the list of available block devices ------------------ 1.05s 2026-03-18 02:31:30.055203 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.96s 2026-03-18 02:31:30.055215 | orchestrator | Calculate VG sizes (with buffer) ---------------------------------------- 0.88s 2026-03-18 02:31:30.055254 | orchestrator | Get initial list of available block devices ----------------------------- 0.83s 2026-03-18 02:31:30.055276 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.83s 2026-03-18 02:31:30.055296 | orchestrator | Print LVM report data --------------------------------------------------- 0.82s 2026-03-18 02:31:30.055315 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s 2026-03-18 02:31:30.055395 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.78s 2026-03-18 02:31:30.055416 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s 2026-03-18 02:31:30.055434 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2026-03-18 02:31:42.667727 | orchestrator | 2026-03-18 02:31:42 | INFO  | Task b5a3ded9-133f-4417-a8b1-f0d87971f6f6 (facts) was prepared for execution. 2026-03-18 02:31:42.667828 | orchestrator | 2026-03-18 02:31:42 | INFO  | It takes a moment until task b5a3ded9-133f-4417-a8b1-f0d87971f6f6 (facts) has been started and output is visible here. 
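The play above prints an `lvm_report` mapping Ceph OSD logical volumes and physical volumes to their volume groups, then builds a list of VG/LV names from it. As a minimal sketch (not the playbook's actual code; the joined structures `vg_lv_names` and `osd_devices` are illustrative names), the same join over the report shown in the log looks like this:

```python
# Sample data copied from the "Print LVM report data" task output above.
lvm_report = {
    "lv": [
        {"lv_name": "osd-block-def37aef-ab10-5729-81f7-b9371c5efcea",
         "vg_name": "ceph-def37aef-ab10-5729-81f7-b9371c5efcea"},
        {"lv_name": "osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f",
         "vg_name": "ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f"},
    ],
    "pv": [
        {"pv_name": "/dev/sdb",
         "vg_name": "ceph-def37aef-ab10-5729-81f7-b9371c5efcea"},
        {"pv_name": "/dev/sdc",
         "vg_name": "ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f"},
    ],
}

# Each VG backs exactly one PV here, so the VG name links LV to device.
vg_to_pv = {pv["vg_name"]: pv["pv_name"] for pv in lvm_report["pv"]}

# "Create list of VG/LV names" style output: vg_name/lv_name pairs.
vg_lv_names = [f'{lv["vg_name"]}/{lv["lv_name"]}' for lv in lvm_report["lv"]]

# Map each OSD block LV back to the physical device that hosts it.
osd_devices = {lv["lv_name"]: vg_to_pv[lv["vg_name"]] for lv in lvm_report["lv"]}

print(vg_lv_names)
print(osd_devices)
```

With the two-OSD node in this log, `osd-block-def37aef-…` resolves to `/dev/sdb` and `osd-block-f498c8c9-…` to `/dev/sdc`.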
2026-03-18 02:31:56.391911 | orchestrator | 2026-03-18 02:31:56.392019 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-18 02:31:56.392044 | orchestrator | 2026-03-18 02:31:56.392062 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-18 02:31:56.392108 | orchestrator | Wednesday 18 March 2026 02:31:47 +0000 (0:00:00.321) 0:00:00.321 ******* 2026-03-18 02:31:56.392119 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:31:56.392129 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:31:56.392142 | orchestrator | ok: [testbed-manager] 2026-03-18 02:31:56.392158 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:31:56.392174 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:31:56.392190 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:31:56.392208 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:31:56.392225 | orchestrator | 2026-03-18 02:31:56.392240 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-18 02:31:56.392250 | orchestrator | Wednesday 18 March 2026 02:31:48 +0000 (0:00:01.254) 0:00:01.575 ******* 2026-03-18 02:31:56.392259 | orchestrator | skipping: [testbed-manager] 2026-03-18 02:31:56.392270 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:31:56.392279 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:31:56.392289 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:31:56.392298 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:31:56.392307 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:31:56.392352 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:56.392370 | orchestrator | 2026-03-18 02:31:56.392386 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-18 02:31:56.392404 | orchestrator | 2026-03-18 02:31:56.392423 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-18 02:31:56.392441 | orchestrator | Wednesday 18 March 2026 02:31:49 +0000 (0:00:01.410) 0:00:02.986 ******* 2026-03-18 02:31:56.392456 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:31:56.392468 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:31:56.392479 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:31:56.392490 | orchestrator | ok: [testbed-manager] 2026-03-18 02:31:56.392501 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:31:56.392512 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:31:56.392522 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:31:56.392533 | orchestrator | 2026-03-18 02:31:56.392544 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-18 02:31:56.392555 | orchestrator | 2026-03-18 02:31:56.392566 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-18 02:31:56.392578 | orchestrator | Wednesday 18 March 2026 02:31:55 +0000 (0:00:05.258) 0:00:08.244 ******* 2026-03-18 02:31:56.392589 | orchestrator | skipping: [testbed-manager] 2026-03-18 02:31:56.392600 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:31:56.392610 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:31:56.392621 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:31:56.392632 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:31:56.392643 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:31:56.392653 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:31:56.392664 | orchestrator | 2026-03-18 02:31:56.392675 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 02:31:56.392687 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 02:31:56.392699 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-18 02:31:56.392711 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 02:31:56.392722 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 02:31:56.392733 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 02:31:56.392744 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 02:31:56.392764 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 02:31:56.392775 | orchestrator | 2026-03-18 02:31:56.392786 | orchestrator | 2026-03-18 02:31:56.392797 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 02:31:56.392808 | orchestrator | Wednesday 18 March 2026 02:31:55 +0000 (0:00:00.626) 0:00:08.871 ******* 2026-03-18 02:31:56.392818 | orchestrator | =============================================================================== 2026-03-18 02:31:56.392844 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.26s 2026-03-18 02:31:56.392856 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.41s 2026-03-18 02:31:56.392865 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.25s 2026-03-18 02:31:56.392875 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s 2026-03-18 02:31:59.037145 | orchestrator | 2026-03-18 02:31:59 | INFO  | Task 4ec8f506-12b8-4249-9dcc-c01c1190ff08 (ceph) was prepared for execution. 2026-03-18 02:31:59.037239 | orchestrator | 2026-03-18 02:31:59 | INFO  | It takes a moment until task 4ec8f506-12b8-4249-9dcc-c01c1190ff08 (ceph) has been started and output is visible here. 
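The PLAY RECAP blocks in this log are the quickest health signal when scanning a long run: every host line must show `failed=0` and `unreachable=0`. A hypothetical helper (`recap_ok` is an illustrative name, not part of OSISM or Zuul tooling) for checking recap lines like the ones above might look like:

```python
import re

# Matches Ansible recap host lines, e.g.
# "testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2 ..."
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def recap_ok(line: str) -> bool:
    """Return True if a recap line reports no failed or unreachable hosts."""
    m = RECAP_RE.search(line)
    if not m:
        raise ValueError("not a PLAY RECAP host line")
    return m.group("failed") == "0" and m.group("unreachable") == "0"

line = ("testbed-node-5 : ok=2  changed=0 unreachable=0 "
        "failed=0 skipped=2  rescued=0 ignored=0")
print(recap_ok(line))  # → True
```

In this run all seven hosts report `failed=0 unreachable=0`, so the facts play completed cleanly before the ceph task was dispatched.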
2026-03-18 02:32:18.529544 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-18 02:32:18.529627 | orchestrator | 2.16.14 2026-03-18 02:32:18.529634 | orchestrator | 2026-03-18 02:32:18.529639 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-18 02:32:18.529644 | orchestrator | 2026-03-18 02:32:18.529648 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 02:32:18.529653 | orchestrator | Wednesday 18 March 2026 02:32:04 +0000 (0:00:00.916) 0:00:00.916 ******* 2026-03-18 02:32:18.529658 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:32:18.529663 | orchestrator | 2026-03-18 02:32:18.529667 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-18 02:32:18.529671 | orchestrator | Wednesday 18 March 2026 02:32:05 +0000 (0:00:01.256) 0:00:02.172 ******* 2026-03-18 02:32:18.529675 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:32:18.529679 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:32:18.529682 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:32:18.529686 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:32:18.529690 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:32:18.529693 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:32:18.529697 | orchestrator | 2026-03-18 02:32:18.529701 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-18 02:32:18.529705 | orchestrator | Wednesday 18 March 2026 02:32:07 +0000 (0:00:01.362) 0:00:03.535 ******* 2026-03-18 02:32:18.529709 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:32:18.529713 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:32:18.529717 | orchestrator | ok: [testbed-node-5] 2026-03-18 
02:32:18.529720 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:32:18.529724 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:32:18.529728 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:32:18.529732 | orchestrator | 2026-03-18 02:32:18.529735 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 02:32:18.529739 | orchestrator | Wednesday 18 March 2026 02:32:08 +0000 (0:00:00.862) 0:00:04.398 ******* 2026-03-18 02:32:18.529743 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:32:18.529746 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:32:18.529750 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:32:18.529754 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:32:18.529757 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:32:18.529761 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:32:18.529779 | orchestrator | 2026-03-18 02:32:18.529784 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 02:32:18.529787 | orchestrator | Wednesday 18 March 2026 02:32:09 +0000 (0:00:00.978) 0:00:05.376 ******* 2026-03-18 02:32:18.529791 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:32:18.529795 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:32:18.529798 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:32:18.529802 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:32:18.529806 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:32:18.529809 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:32:18.529813 | orchestrator | 2026-03-18 02:32:18.529817 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-18 02:32:18.529821 | orchestrator | Wednesday 18 March 2026 02:32:09 +0000 (0:00:00.842) 0:00:06.218 ******* 2026-03-18 02:32:18.529824 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:32:18.529828 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:32:18.529832 | orchestrator | ok: 
[testbed-node-5]
2026-03-18 02:32:18.529835 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:32:18.529839 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:32:18.529843 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:32:18.529846 | orchestrator |
2026-03-18 02:32:18.529850 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-18 02:32:18.529854 | orchestrator | Wednesday 18 March 2026 02:32:10 +0000 (0:00:00.652) 0:00:06.871 *******
2026-03-18 02:32:18.529857 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:32:18.529861 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:32:18.529865 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:32:18.529869 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:32:18.529875 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:32:18.529880 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:32:18.529890 | orchestrator |
2026-03-18 02:32:18.529896 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-18 02:32:18.529902 | orchestrator | Wednesday 18 March 2026 02:32:11 +0000 (0:00:00.931) 0:00:07.803 *******
2026-03-18 02:32:18.529908 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:18.529915 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:18.529920 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:18.529927 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:18.529932 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:18.529939 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:18.529944 | orchestrator |
2026-03-18 02:32:18.529949 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-18 02:32:18.529955 | orchestrator | Wednesday 18 March 2026 02:32:12 +0000 (0:00:00.651) 0:00:08.454 *******
2026-03-18 02:32:18.529961 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:32:18.529966 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:32:18.529972 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:32:18.529978 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:32:18.529984 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:32:18.529989 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:32:18.529995 | orchestrator |
2026-03-18 02:32:18.530076 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-18 02:32:18.530085 | orchestrator | Wednesday 18 March 2026 02:32:13 +0000 (0:00:00.828) 0:00:09.283 *******
2026-03-18 02:32:18.530091 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 02:32:18.530095 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 02:32:18.530099 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 02:32:18.530103 | orchestrator |
2026-03-18 02:32:18.530106 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-18 02:32:18.530111 | orchestrator | Wednesday 18 March 2026 02:32:13 +0000 (0:00:00.671) 0:00:09.955 *******
2026-03-18 02:32:18.530115 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:32:18.530119 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:32:18.530129 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:32:18.530144 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:32:18.530149 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:32:18.530153 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:32:18.530157 | orchestrator |
2026-03-18 02:32:18.530161 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-18 02:32:18.530165 | orchestrator | Wednesday 18 March 2026 02:32:14 +0000 (0:00:00.807) 0:00:10.763 *******
2026-03-18 02:32:18.530170 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 02:32:18.530174 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 02:32:18.530178 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 02:32:18.530182 | orchestrator |
2026-03-18 02:32:18.530187 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-18 02:32:18.530191 | orchestrator | Wednesday 18 March 2026 02:32:17 +0000 (0:00:02.506) 0:00:13.269 *******
2026-03-18 02:32:18.530196 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-18 02:32:18.530200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-18 02:32:18.530205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-18 02:32:18.530209 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:18.530213 | orchestrator |
2026-03-18 02:32:18.530218 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-18 02:32:18.530222 | orchestrator | Wednesday 18 March 2026 02:32:17 +0000 (0:00:00.427) 0:00:13.696 *******
2026-03-18 02:32:18.530270 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-18 02:32:18.530278 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-18 02:32:18.530282 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-18 02:32:18.530286 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:18.530290 | orchestrator |
2026-03-18 02:32:18.530294 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-18 02:32:18.530310 | orchestrator | Wednesday 18 March 2026 02:32:18 +0000 (0:00:00.700) 0:00:14.397 *******
2026-03-18 02:32:18.530316 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:18.530323 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:18.530327 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:18.530335 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:18.530339 | orchestrator |
2026-03-18 02:32:18.530343 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-18 02:32:18.530351 | orchestrator | Wednesday 18 March 2026 02:32:18 +0000 (0:00:00.180) 0:00:14.578 *******
2026-03-18 02:32:18.530362 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 02:32:15.508033', 'end': '2026-03-18 02:32:15.548891', 'delta': '0:00:00.040858', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-18 02:32:29.117235 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 02:32:16.079201', 'end': '2026-03-18 02:32:16.121487', 'delta': '0:00:00.042286', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-18 02:32:29.117355 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 02:32:16.592413', 'end': '2026-03-18 02:32:16.641399', 'delta': '0:00:00.048986', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-18 02:32:29.117363 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:29.117369 | orchestrator |
2026-03-18 02:32:29.117374 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-18 02:32:29.117379 | orchestrator | Wednesday 18 March 2026 02:32:18 +0000 (0:00:00.192) 0:00:14.771 *******
2026-03-18 02:32:29.117384 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:32:29.117389 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:32:29.117393 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:32:29.117396 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:32:29.117400 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:32:29.117404 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:32:29.117408 | orchestrator |
2026-03-18 02:32:29.117412 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-18 02:32:29.117416 | orchestrator | Wednesday 18 March 2026 02:32:19 +0000 (0:00:00.796) 0:00:15.567 *******
2026-03-18 02:32:29.117455 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-18 02:32:29.117460 | orchestrator |
2026-03-18 02:32:29.117464 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-18 02:32:29.117468 | orchestrator | Wednesday 18 March 2026 02:32:20 +0000 (0:00:00.887) 0:00:16.455 *******
2026-03-18 02:32:29.117472 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:29.117477 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:29.117496 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:29.117500 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:29.117504 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:29.117508 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:29.117512 | orchestrator |
2026-03-18 02:32:29.117516 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-18 02:32:29.117519 | orchestrator | Wednesday 18 March 2026 02:32:21 +0000 (0:00:00.903) 0:00:17.358 *******
2026-03-18 02:32:29.117523 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:29.117527 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:29.117531 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:29.117535 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:29.117538 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:29.117542 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:29.117546 | orchestrator |
2026-03-18 02:32:29.117549 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-18 02:32:29.117553 | orchestrator | Wednesday 18 March 2026 02:32:22 +0000 (0:00:01.274) 0:00:18.633 *******
2026-03-18 02:32:29.117557 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:29.117561 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:29.117564 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:29.117568 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:29.117571 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:29.117575 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:29.117579 | orchestrator |
2026-03-18 02:32:29.117582 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-18 02:32:29.117596 | orchestrator | Wednesday 18 March 2026 02:32:23 +0000 (0:00:00.627) 0:00:19.261 *******
2026-03-18 02:32:29.117600 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:29.117603 | orchestrator |
2026-03-18 02:32:29.117607 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-18 02:32:29.117611 | orchestrator | Wednesday 18 March 2026 02:32:23 +0000 (0:00:00.165) 0:00:19.426 *******
2026-03-18 02:32:29.117615 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:29.117618 | orchestrator |
2026-03-18 02:32:29.117622 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-18 02:32:29.117626 | orchestrator | Wednesday 18 March 2026 02:32:23 +0000 (0:00:00.248) 0:00:19.674 *******
2026-03-18 02:32:29.117629 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:29.117633 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:29.117637 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:29.117640 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:29.117644 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:29.117648 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:29.117652 | orchestrator |
2026-03-18 02:32:29.117666 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-18 02:32:29.117670 | orchestrator | Wednesday 18 March 2026 02:32:24 +0000 (0:00:00.873) 0:00:20.548 *******
2026-03-18 02:32:29.117674 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:29.117678 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:29.117682 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:29.117685 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:29.117689 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:29.117693 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:29.117696 | orchestrator |
2026-03-18 02:32:29.117700 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-18 02:32:29.117704 | orchestrator | Wednesday 18 March 2026 02:32:24 +0000 (0:00:00.633) 0:00:21.182 *******
2026-03-18 02:32:29.117708 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:29.117711 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:29.117715 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:29.117719 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:29.117722 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:29.117730 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:29.117734 | orchestrator |
2026-03-18 02:32:29.117745 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-18 02:32:29.117749 | orchestrator | Wednesday 18 March 2026 02:32:25 +0000 (0:00:00.846) 0:00:22.029 *******
2026-03-18 02:32:29.117753 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:29.117757 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:29.117760 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:29.117764 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:29.117767 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:29.117771 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:29.117775 | orchestrator |
2026-03-18 02:32:29.117785 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-18 02:32:29.117788 | orchestrator | Wednesday 18 March 2026 02:32:26 +0000 (0:00:00.671) 0:00:22.701 *******
2026-03-18 02:32:29.117792 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:29.117796 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:29.117799 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:29.117804 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:29.117808 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:29.117812 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:29.117816 | orchestrator |
2026-03-18 02:32:29.117820 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-18 02:32:29.117825 | orchestrator | Wednesday 18 March 2026 02:32:27 +0000 (0:00:00.912) 0:00:23.614 *******
2026-03-18 02:32:29.117829 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:29.117833 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:29.117837 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:29.117842 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:29.117846 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:29.117850 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:29.117854 | orchestrator |
2026-03-18 02:32:29.117859 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-18 02:32:29.117864 | orchestrator | Wednesday 18 March 2026 02:32:28 +0000 (0:00:00.668) 0:00:24.282 *******
2026-03-18 02:32:29.117868 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:29.117872 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:29.117876 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:29.117881 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:29.117885 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:29.117890 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:29.117894 | orchestrator |
2026-03-18 02:32:29.117898 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-18 02:32:29.117903 | orchestrator | Wednesday 18 March 2026 02:32:28 +0000 (0:00:00.923) 0:00:25.206 *******
2026-03-18 02:32:29.117908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb', 'dm-uuid-LVM-W5EL6s0cOZukCJgJFLnUeUfZF3v581ieTXRD31C4XH2D2TZlGP7o3YPUberRNbx3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.117916 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a', 'dm-uuid-LVM-OBDgCO1TfJO26KZndmcG4XUfdlxxEe11eqb03b1R3TiAd5BAik4vvOnTIot4pXZ1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.117928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.227773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.227869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.227880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.227889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.227898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.227906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.227914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.227962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-18 02:32:29.227996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hyLInL-qBmT-hkMu-ewvD-iGD6-c0uQ-hDScLy', 'scsi-0QEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768', 'scsi-SQEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-18 02:32:29.228007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-w7AXxM-UwrZ-P6aH-00LI-mMT0-kFYy-HZNbAJ', 'scsi-0QEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e', 'scsi-SQEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-18 02:32:29.228016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa', 'scsi-SQEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-18 02:32:29.228030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af', 'dm-uuid-LVM-r2QSpox5L5YvZxbLW2ofZmnL2yRyHAcb31gjpKAQuj1V0dzEH4DggGep9onP7U5M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.228056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-18 02:32:29.447543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d', 'dm-uuid-LVM-1nghto8FjlgOMGE0qJuNE35bcFGeakm7FeqYn9N8yM2I7mHfmTh3UyYEE55mFAWL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.447629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.447642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.447650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.447657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.447664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.447701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.447709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.447716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-18 02:32:29.447723 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:29.447751 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-18 02:32:29.447762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jnV2yd-YS7R-Vqep-tcrP-VJxp-okiM-Yb1ELG', 'scsi-0QEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc', 'scsi-SQEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-18 02:32:29.447780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gdKyfy-wnzk-0StP-QaSt-irpk-iROA-l0CD4I', 'scsi-0QEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a', 'scsi-SQEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:32:29.447798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a', 'scsi-SQEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:32:29.540379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:32:29.540460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea', 'dm-uuid-LVM-datDZvt3H0VWDhIXtfyG2nxxdM9DebWAT9QYVvDcd9eNFRbEejIJhI9dObKuqGRw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.540470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f', 'dm-uuid-LVM-IyJ409WPQ2Ewwg643e4T8GcTWsVLXvc4PfxdfcUZHCmpn1f575ZO5FoE28c03VdS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.540477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.540505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.540529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.540536 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.540542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.540561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.540567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.540573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.540585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:32:29.540601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wEEZ4B-D8dq-p1QG-iT9B-teZl-6bRA-4Rtw7V', 'scsi-0QEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00', 'scsi-SQEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:32:29.540613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yCkM9t-1XKI-b30Y-UmhR-lcOf-KBlN-LK1ss0', 'scsi-0QEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568', 'scsi-SQEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:32:29.883682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216', 'scsi-SQEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:32:29.883797 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:32:29.883817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:32:29.883860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.883881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.883906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.883922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.883946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.883967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.884006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.884023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.884051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:32:29.884084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:32:29.884095 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:32:29.884105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.884116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:29.884134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:30.172415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:30.172546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:30.172562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:30.172571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:30.172594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:30.172625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part1', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part14', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part15', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part16', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:32:30.172643 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:32:30.172667 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:32:30.172682 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:32:30.172696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:30.172711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:30.172730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:30.172743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:30.172752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:30.172760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:30.172768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:30.172783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:32:30.408436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part16', 
'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:32:30.408543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:32:30.408561 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:32:30.408576 | orchestrator | 2026-03-18 02:32:30.408588 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-18 02:32:30.408601 | orchestrator | Wednesday 18 March 2026 02:32:30 +0000 (0:00:01.206) 0:00:26.413 ******* 2026-03-18 02:32:30.408614 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb', 'dm-uuid-LVM-W5EL6s0cOZukCJgJFLnUeUfZF3v581ieTXRD31C4XH2D2TZlGP7o3YPUberRNbx3'], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.408646 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a', 'dm-uuid-LVM-OBDgCO1TfJO26KZndmcG4XUfdlxxEe11eqb03b1R3TiAd5BAik4vvOnTIot4pXZ1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.408682 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.408706 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.408737 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.408753 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.408765 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.408776 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.408813 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.465410 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af', 'dm-uuid-LVM-r2QSpox5L5YvZxbLW2ofZmnL2yRyHAcb31gjpKAQuj1V0dzEH4DggGep9onP7U5M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.465516 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.465526 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d', 'dm-uuid-LVM-1nghto8FjlgOMGE0qJuNE35bcFGeakm7FeqYn9N8yM2I7mHfmTh3UyYEE55mFAWL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-03-18 02:32:30.465549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.465575 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.465593 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hyLInL-qBmT-hkMu-ewvD-iGD6-c0uQ-hDScLy', 'scsi-0QEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768', 'scsi-SQEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.465600 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.465607 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-w7AXxM-UwrZ-P6aH-00LI-mMT0-kFYy-HZNbAJ', 'scsi-0QEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e', 'scsi-SQEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.465624 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.853163 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa', 'scsi-SQEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.853286 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.853360 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.853374 | orchestrator | skipping: 
[testbed-node-3] 2026-03-18 02:32:30.853387 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.853421 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.853434 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.853465 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.853489 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.853515 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jnV2yd-YS7R-Vqep-tcrP-VJxp-okiM-Yb1ELG', 'scsi-0QEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc', 'scsi-SQEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:30.853557 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gdKyfy-wnzk-0StP-QaSt-irpk-iROA-l0CD4I', 'scsi-0QEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a', 'scsi-SQEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:31.105761 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a', 'scsi-SQEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:31.105873 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:31.105888 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea', 'dm-uuid-LVM-datDZvt3H0VWDhIXtfyG2nxxdM9DebWAT9QYVvDcd9eNFRbEejIJhI9dObKuqGRw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:31.105917 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f', 'dm-uuid-LVM-IyJ409WPQ2Ewwg643e4T8GcTWsVLXvc4PfxdfcUZHCmpn1f575ZO5FoE28c03VdS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:31.105928 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:32:31.105956 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.105972 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.105982 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.105998 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.106007 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.106072 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.106083 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:31.106094 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.106123 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.185930 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wEEZ4B-D8dq-p1QG-iT9B-teZl-6bRA-4Rtw7V', 'scsi-0QEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00', 'scsi-SQEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.186116 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yCkM9t-1XKI-b30Y-UmhR-lcOf-KBlN-LK1ss0', 'scsi-0QEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568', 'scsi-SQEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.186149 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216', 'scsi-SQEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.186172 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.186263 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.186282 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.186370 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.186430 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.186445 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.186464 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.186487 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.186514 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.360805 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.360896 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.360931 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:31.360942 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.360968 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.360976 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.360984 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.360992 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.361005 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.361018 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.361026 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.361039 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part1', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part14', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part15', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part16', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.615712 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.615818 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:31.615828 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:31.615834 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.615839 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.615843 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.615847 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.615851 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.615870 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.615879 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.615883 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.615888 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:31.615903 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-18 02:32:44.752109 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:44.752207 | orchestrator |
2026-03-18 02:32:44.752217 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists]
******************************
2026-03-18 02:32:44.752225 | orchestrator | Wednesday 18 March 2026 02:32:31 +0000 (0:00:01.439) 0:00:27.853 *******
2026-03-18 02:32:44.752232 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:32:44.752239 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:32:44.752246 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:32:44.752252 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:32:44.752259 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:32:44.752265 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:32:44.752271 | orchestrator |
2026-03-18 02:32:44.752278 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-18 02:32:44.752319 | orchestrator | Wednesday 18 March 2026 02:32:32 +0000 (0:00:01.051) 0:00:28.904 *******
2026-03-18 02:32:44.752326 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:32:44.752333 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:32:44.752339 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:32:44.752345 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:32:44.752352 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:32:44.752358 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:32:44.752364 | orchestrator |
2026-03-18 02:32:44.752370 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-18 02:32:44.752377 | orchestrator | Wednesday 18 March 2026 02:32:33 +0000 (0:00:00.910) 0:00:29.815 *******
2026-03-18 02:32:44.752383 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:44.752389 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:44.752396 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:44.752402 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:44.752408 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:44.752414 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:44.752421 | orchestrator |
2026-03-18 02:32:44.752427 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-18 02:32:44.752433 | orchestrator | Wednesday 18 March 2026 02:32:34 +0000 (0:00:00.654) 0:00:30.469 *******
2026-03-18 02:32:44.752440 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:44.752447 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:44.752453 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:44.752459 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:44.752465 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:44.752471 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:44.752477 | orchestrator |
2026-03-18 02:32:44.752484 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-18 02:32:44.752490 | orchestrator | Wednesday 18 March 2026 02:32:35 +0000 (0:00:01.014) 0:00:31.484 *******
2026-03-18 02:32:44.752496 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:44.752502 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:44.752508 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:44.752515 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:44.752521 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:44.752546 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:44.752552 | orchestrator |
2026-03-18 02:32:44.752558 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-18 02:32:44.752565 | orchestrator | Wednesday 18 March 2026 02:32:35 +0000 (0:00:00.733) 0:00:32.217 *******
2026-03-18 02:32:44.752571 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:44.752577 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:44.752583 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:44.752589 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:44.752595 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:44.752601 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:44.752607 | orchestrator |
2026-03-18 02:32:44.752614 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-18 02:32:44.752620 | orchestrator | Wednesday 18 March 2026 02:32:36 +0000 (0:00:00.939) 0:00:33.157 *******
2026-03-18 02:32:44.752626 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-18 02:32:44.752633 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-18 02:32:44.752639 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-18 02:32:44.752646 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-18 02:32:44.752652 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-18 02:32:44.752658 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-18 02:32:44.752664 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-18 02:32:44.752670 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-18 02:32:44.752677 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-18 02:32:44.752684 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-18 02:32:44.752690 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-18 02:32:44.752697 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-18 02:32:44.752703 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-18 02:32:44.752710 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-18 02:32:44.752716 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-18 02:32:44.752723 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-18 02:32:44.752729 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-18 02:32:44.752735 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-18 02:32:44.752742 | orchestrator |
2026-03-18 02:32:44.752761 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-18 02:32:44.752767 | orchestrator | Wednesday 18 March 2026 02:32:38 +0000 (0:00:01.901) 0:00:35.059 *******
2026-03-18 02:32:44.752774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-18 02:32:44.752781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-18 02:32:44.752787 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-18 02:32:44.752794 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:44.752801 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-18 02:32:44.752807 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-18 02:32:44.752813 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-18 02:32:44.752832 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:44.752839 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-18 02:32:44.752845 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-18 02:32:44.752851 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-18 02:32:44.752857 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:44.752863 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-18 02:32:44.752869 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-18 02:32:44.752875 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-18 02:32:44.752886 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:44.752892 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-18 02:32:44.752899 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-18 02:32:44.752905 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-18 02:32:44.752911 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:44.752917 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-18 02:32:44.752923 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-18 02:32:44.752929 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-18 02:32:44.752936 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:44.752942 | orchestrator |
2026-03-18 02:32:44.752948 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-18 02:32:44.752954 | orchestrator | Wednesday 18 March 2026 02:32:39 +0000 (0:00:01.094) 0:00:36.153 *******
2026-03-18 02:32:44.752989 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:32:44.752996 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:32:44.753002 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:32:44.753009 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:32:44.753016 | orchestrator |
2026-03-18 02:32:44.753022 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-18 02:32:44.753030 | orchestrator | Wednesday 18 March 2026 02:32:41 +0000 (0:00:01.168) 0:00:37.322 *******
2026-03-18 02:32:44.753036 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:44.753042 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:44.753048 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:44.753054 | orchestrator |
2026-03-18 02:32:44.753061 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-18 02:32:44.753067 | orchestrator | Wednesday 18 March 2026 02:32:41 +0000 (0:00:00.452) 0:00:37.775 *******
2026-03-18 02:32:44.753073 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:44.753079 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:44.753085 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:44.753091 | orchestrator |
2026-03-18 02:32:44.753098 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-18 02:32:44.753104 | orchestrator | Wednesday 18 March 2026 02:32:41 +0000 (0:00:00.369) 0:00:38.145 *******
2026-03-18 02:32:44.753110 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:44.753116 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:32:44.753121 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:32:44.753127 | orchestrator |
2026-03-18 02:32:44.753133 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-18 02:32:44.753138 | orchestrator | Wednesday 18 March 2026 02:32:42 +0000 (0:00:00.364) 0:00:38.509 *******
2026-03-18 02:32:44.753144 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:32:44.753149 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:32:44.753155 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:32:44.753160 | orchestrator |
2026-03-18 02:32:44.753165 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-18 02:32:44.753171 | orchestrator | Wednesday 18 March 2026 02:32:43 +0000 (0:00:00.799) 0:00:39.309 *******
2026-03-18 02:32:44.753177 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-18 02:32:44.753183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-18 02:32:44.753190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-18 02:32:44.753196 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:44.753202 | orchestrator |
2026-03-18 02:32:44.753208 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-18 02:32:44.753214 | orchestrator | Wednesday 18 March 2026 02:32:43 +0000 (0:00:00.458) 0:00:39.768 *******
2026-03-18 02:32:44.753220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-18 02:32:44.753231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-18 02:32:44.753237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-18 02:32:44.753243 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:44.753250 | orchestrator |
2026-03-18 02:32:44.753256 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-18 02:32:44.753262 | orchestrator | Wednesday 18 March 2026 02:32:43 +0000 (0:00:00.430) 0:00:40.199 *******
2026-03-18 02:32:44.753268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-18 02:32:44.753278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-18 02:32:44.753297 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-18 02:32:44.753304 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:32:44.753311 | orchestrator |
2026-03-18 02:32:44.753317 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-18 02:32:44.753323 | orchestrator | Wednesday 18 March 2026 02:32:44 +0000 (0:00:00.440) 0:00:40.639 *******
2026-03-18 02:32:44.753329 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:32:44.753335 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:32:44.753341 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:32:44.753348 | orchestrator |
2026-03-18 02:32:44.753354 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-18 02:32:44.753364 | orchestrator | Wednesday 18 March 2026 02:32:44 +0000 (0:00:00.355) 0:00:40.995 *******
2026-03-18 02:33:05.870946 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-18 02:33:05.871086 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-18 02:33:05.871102 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-18 02:33:05.871113 | orchestrator |
2026-03-18 02:33:05.871125 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-18 02:33:05.871136 | orchestrator | Wednesday 18 March 2026 02:32:45 +0000 (0:00:01.107) 0:00:42.102 *******
2026-03-18 02:33:05.871147 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 02:33:05.871158 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 02:33:05.871169 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 02:33:05.871180 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-18 02:33:05.871190 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-18 02:33:05.871200 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-18 02:33:05.871209 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 02:33:05.871218 | orchestrator |
2026-03-18 02:33:05.871227 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-18 02:33:05.871236 | orchestrator | Wednesday 18 March 2026 02:32:46 +0000 (0:00:00.849) 0:00:42.951 *******
2026-03-18 02:33:05.871245 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 02:33:05.871254 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 02:33:05.871263 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 02:33:05.871320 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-18 02:33:05.871332 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-18 02:33:05.871342 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-18 02:33:05.871351 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 02:33:05.871360 | orchestrator |
2026-03-18 02:33:05.871369 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-18 02:33:05.871379 | orchestrator | Wednesday 18 March 2026 02:32:48 +0000 (0:00:02.243) 0:00:45.195 *******
2026-03-18 02:33:05.871411 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:33:05.871422 | orchestrator |
2026-03-18 02:33:05.871432 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-18 02:33:05.871440 | orchestrator | Wednesday 18 March 2026 02:32:50 +0000 (0:00:01.341) 0:00:46.536 *******
2026-03-18 02:33:05.871449 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:33:05.871458 | orchestrator |
2026-03-18 02:33:05.871467 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-18 02:33:05.871476 | orchestrator | Wednesday 18 March 2026 02:32:51 +0000 (0:00:01.344) 0:00:47.881 *******
2026-03-18 02:33:05.871485 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:33:05.871494 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:33:05.871504 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:33:05.871513 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:33:05.871522 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:33:05.871531 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:33:05.871540 | orchestrator |
2026-03-18 02:33:05.871550 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-18 02:33:05.871559 | orchestrator | Wednesday 18 March 2026 02:32:52 +0000 (0:00:01.264) 0:00:49.145 *******
2026-03-18 02:33:05.871568 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:33:05.871577 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:33:05.871586 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:33:05.871596 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:33:05.871604 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:33:05.871613 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:33:05.871622 | orchestrator |
2026-03-18 02:33:05.871631 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-18 02:33:05.871641 | orchestrator | Wednesday 18 March 2026 02:32:53 +0000 (0:00:00.733) 0:00:49.879 *******
2026-03-18 02:33:05.871650 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:33:05.871659 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:33:05.871668 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:33:05.871676 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:33:05.871685 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:33:05.871694 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:33:05.871703 | orchestrator |
2026-03-18 02:33:05.871712 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-18 02:33:05.871738 | orchestrator | Wednesday 18 March 2026 02:32:54 +0000 (0:00:00.978) 0:00:50.858 *******
2026-03-18 02:33:05.871747 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:33:05.871757 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:33:05.871766 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:33:05.871775 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:33:05.871784 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:33:05.871794 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:33:05.871803 | orchestrator |
2026-03-18 02:33:05.871811 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-18 02:33:05.871817 | orchestrator | Wednesday 18 March 2026 02:32:55 +0000 (0:00:00.748) 0:00:51.606 *******
2026-03-18 02:33:05.871822 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:33:05.871828 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:33:05.871850 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:33:05.871856 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:33:05.871861 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:33:05.871867 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:33:05.871872 | orchestrator |
2026-03-18 02:33:05.871877 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-18 02:33:05.871883 | orchestrator | Wednesday 18 March 2026 02:32:56 +0000 (0:00:01.270) 0:00:52.877 *******
2026-03-18 02:33:05.871897 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:33:05.871903 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:33:05.871908 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:33:05.871914 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:33:05.871919 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:33:05.871924 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:33:05.871930 | orchestrator |
2026-03-18 02:33:05.871935 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-18 02:33:05.871941 | orchestrator | Wednesday 18 March 2026 02:32:57 +0000 (0:00:00.665) 0:00:53.543 *******
2026-03-18 02:33:05.871946 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:33:05.871951 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:33:05.871957 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:33:05.871962 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:33:05.871967 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:33:05.871972 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:33:05.871978 | orchestrator |
2026-03-18 02:33:05.871983 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-18 02:33:05.871989 | orchestrator | Wednesday 18 March 2026 02:32:58 +0000 (0:00:00.918) 0:00:54.461 *******
2026-03-18 02:33:05.871994 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:33:05.871999 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:33:05.872005 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:33:05.872010 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:33:05.872015 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:33:05.872021 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:33:05.872026 | orchestrator |
2026-03-18 02:33:05.872032 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-18 02:33:05.872037 | orchestrator | Wednesday 18 March 2026 02:32:59 +0000 (0:00:01.107) 0:00:55.569 *******
2026-03-18 02:33:05.872042 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:33:05.872047 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:33:05.872053 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:33:05.872058 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:33:05.872063 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:33:05.872069 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:33:05.872074 | orchestrator |
2026-03-18 02:33:05.872079 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-18 02:33:05.872085 | orchestrator | Wednesday 18 March 2026 02:33:00 +0000 (0:00:01.379) 0:00:56.948 *******
2026-03-18 02:33:05.872090 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:33:05.872096 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:33:05.872105 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:33:05.872113 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:33:05.872121 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:33:05.872129 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:33:05.872138 | orchestrator |
2026-03-18 02:33:05.872148 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-18 02:33:05.872157 | orchestrator | Wednesday 18 March 2026 02:33:01 +0000 (0:00:00.641) 0:00:57.590 *******
2026-03-18 02:33:05.872166 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:33:05.872173 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:33:05.872179 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:33:05.872184 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:33:05.872189 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:33:05.872194 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:33:05.872200 | orchestrator |
2026-03-18 02:33:05.872205 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-18 02:33:05.872211 | orchestrator | Wednesday 18 March 2026 02:33:02 +0000 (0:00:00.900) 0:00:58.490 *******
2026-03-18 02:33:05.872216 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:33:05.872221 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:33:05.872226 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:33:05.872232 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:33:05.872241 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:33:05.872246 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:33:05.872252 | orchestrator |
2026-03-18 02:33:05.872257 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-18 02:33:05.872262 | orchestrator | Wednesday 18 March 2026 02:33:02 +0000 (0:00:00.672) 0:00:59.162 *******
2026-03-18 02:33:05.872268 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:33:05.872302 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:33:05.872309 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:33:05.872314 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:33:05.872319 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:33:05.872325 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:33:05.872330 | orchestrator |
2026-03-18 02:33:05.872335 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-18 02:33:05.872341 | orchestrator | Wednesday 18 March 2026 02:33:03 +0000 (0:00:00.936) 0:01:00.098 *******
2026-03-18 02:33:05.872346 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:33:05.872352 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:33:05.872357 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:33:05.872362 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:33:05.872368 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:33:05.872373 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:33:05.872378 | orchestrator |
2026-03-18 02:33:05.872410 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-18 02:33:05.872416 | orchestrator | Wednesday 18 March 2026 02:33:04 +0000 (0:00:00.659) 0:01:00.757 *******
2026-03-18 02:33:05.872421 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:33:05.872427 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:33:05.872432 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:33:05.872437 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:33:05.872442 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:33:05.872448 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:33:05.872453 | orchestrator |
2026-03-18 02:33:05.872458 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-18 02:33:05.872464 | orchestrator | Wednesday 18 March 2026 02:33:05 +0000 (0:00:01.025) 0:01:01.783 *******
2026-03-18 02:33:05.872469 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:33:05.872479 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:34:26.026404 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:34:26.026525 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:34:26.026541 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:34:26.026552 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:34:26.026562 | orchestrator |
2026-03-18 02:34:26.026574 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-18 02:34:26.026585 | orchestrator | Wednesday 18 March 2026 02:33:06 +0000 (0:00:00.669) 0:01:02.453 *******
2026-03-18 02:34:26.027488 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:34:26.027521 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:34:26.027538 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:34:26.027553 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:34:26.027569 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:34:26.027583 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:34:26.027597 | orchestrator |
2026-03-18 02:34:26.027612 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-18 02:34:26.027629 | orchestrator | Wednesday 18 March 2026 02:33:07 +0000 (0:00:00.930) 0:01:03.383 *******
2026-03-18 02:34:26.027647 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:34:26.027663 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:34:26.027673 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:34:26.027683 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:34:26.027693 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:34:26.027703 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:34:26.027713 | orchestrator |
2026-03-18 02:34:26.027723 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-18 02:34:26.027760 | orchestrator | Wednesday 18 March 2026 02:33:07 +0000 (0:00:00.680) 0:01:04.064 *******
2026-03-18 02:34:26.027770 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:34:26.027781 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:34:26.027798 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:34:26.027812 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:34:26.027839 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:34:26.027854 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:34:26.027869 | orchestrator |
2026-03-18 02:34:26.027885 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-18 02:34:26.027901 | orchestrator | Wednesday 18 March 2026 02:33:09 +0000 (0:00:01.378) 0:01:05.442 *******
2026-03-18 02:34:26.027916 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:34:26.027931 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:34:26.027947 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:34:26.027964 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:34:26.027981 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:34:26.027998 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:34:26.028013 | orchestrator |
2026-03-18 02:34:26.028028 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-18 02:34:26.028043 | orchestrator | Wednesday 18 March 2026 02:33:11 +0000 (0:00:02.043) 0:01:07.486 *******
2026-03-18 02:34:26.028058 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:34:26.028074 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:34:26.028090 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:34:26.028106 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:34:26.028122 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:34:26.028137 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:34:26.028153 | orchestrator |
2026-03-18 02:34:26.028169 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-18 02:34:26.028187 | orchestrator | Wednesday 18 March 2026 02:33:13 +0000 (0:00:02.070) 0:01:09.556 *******
2026-03-18 02:34:26.028205 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:34:26.028223 | orchestrator |
2026-03-18 02:34:26.028267 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-18 02:34:26.028284 | orchestrator | Wednesday 18 March 2026 02:33:14 +0000 (0:00:01.549) 0:01:11.106 *******
2026-03-18 02:34:26.028301 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:34:26.028317 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:34:26.028332 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:34:26.028348 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:34:26.028365 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:34:26.028381 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:34:26.028397 | orchestrator |
2026-03-18 02:34:26.028412 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-18 02:34:26.028428 | orchestrator | Wednesday 18 March 2026 02:33:15 +0000 (0:00:00.665) 0:01:11.771 *******
2026-03-18 02:34:26.028443 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:34:26.028459 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:34:26.028475 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:34:26.028491 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:34:26.028506 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:34:26.028522 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:34:26.028537 | orchestrator |
2026-03-18 02:34:26.028554 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-18 02:34:26.028570 | orchestrator | Wednesday 18 March 2026 02:33:16 +0000 (0:00:00.847) 0:01:12.619 *******
2026-03-18 02:34:26.028587 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-18 02:34:26.028604 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-18 02:34:26.028620 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-18 02:34:26.028665 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-18 02:34:26.028676 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-18 02:34:26.028686 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-18 02:34:26.028696 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-18 02:34:26.028706 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-18 02:34:26.028715 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-18 02:34:26.028751 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-18 02:34:26.028769 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-18 02:34:26.028784 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-18 02:34:26.028798 | orchestrator |
2026-03-18 02:34:26.028813 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-18 02:34:26.028828 | orchestrator | Wednesday 18 March 2026 02:33:17 +0000 (0:00:01.363) 0:01:13.983 *******
2026-03-18 02:34:26.028844 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:34:26.028861 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:34:26.028879 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:34:26.028896 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:34:26.028914 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:34:26.028931 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:34:26.028946 | orchestrator |
2026-03-18 02:34:26.028962 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-18 02:34:26.028977 | orchestrator | Wednesday 18 March 2026 02:33:18 +0000 (0:00:01.234) 0:01:15.217 *******
2026-03-18 02:34:26.028993 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:34:26.029008 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:34:26.029023 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:34:26.029040 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:34:26.029059 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:34:26.029069 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:34:26.029079 | orchestrator |
2026-03-18 02:34:26.029088 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-18 02:34:26.029098 | orchestrator | Wednesday 18 March 2026 02:33:19 +0000 (0:00:00.680) 0:01:15.897 *******
2026-03-18 02:34:26.029107 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:34:26.029118 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:34:26.029134 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:34:26.029144 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:34:26.029153 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:34:26.029170 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:34:26.029186 | orchestrator |
2026-03-18 02:34:26.029202 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-18 02:34:26.029219 | orchestrator | Wednesday 18 March 2026 02:33:20 +0000 (0:00:00.916) 0:01:16.814 *******
2026-03-18 02:34:26.029286 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:34:26.029306 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:34:26.029323 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:34:26.029340 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:34:26.029358 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:34:26.029374 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:34:26.029391 | orchestrator |
2026-03-18 02:34:26.029402 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-18 02:34:26.029411 | orchestrator | Wednesday 18 March 2026 02:33:21 +0000 (0:00:00.655) 0:01:17.469 *******
2026-03-18 02:34:26.029421 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:34:26.029442 | orchestrator |
2026-03-18 02:34:26.029452 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-18 02:34:26.029462 | orchestrator | Wednesday 18 March 2026 02:33:22 +0000 (0:00:01.424) 0:01:18.894 *******
2026-03-18 02:34:26.029471 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:34:26.029482 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:34:26.029491 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:34:26.029501 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:34:26.029510 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:34:26.029519 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:34:26.029529 | orchestrator |
2026-03-18 02:34:26.029539 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-18 02:34:26.029548 | orchestrator | Wednesday 18 March 2026 02:34:25 +0000 (0:01:02.633) 0:02:21.528 *******
2026-03-18
02:34:26.029558 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 02:34:26.029567 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 02:34:26.029577 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-18 02:34:26.029586 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:34:26.029596 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 02:34:26.029605 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 02:34:26.029615 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-18 02:34:26.029624 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:34:26.029634 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 02:34:26.029643 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 02:34:26.029653 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-18 02:34:26.029662 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:34:26.029680 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 02:34:26.029690 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 02:34:26.029699 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-18 02:34:26.029709 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:26.029718 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 02:34:26.029728 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 02:34:26.029737 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-03-18 02:34:26.029758 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:50.983622 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 02:34:50.983731 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 02:34:50.983744 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-18 02:34:50.983754 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:50.983763 | orchestrator | 2026-03-18 02:34:50.983773 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-18 02:34:50.983782 | orchestrator | Wednesday 18 March 2026 02:34:26 +0000 (0:00:00.742) 0:02:22.270 ******* 2026-03-18 02:34:50.983791 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:34:50.983800 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:34:50.983808 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:34:50.983817 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:50.983826 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:50.983834 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:50.983843 | orchestrator | 2026-03-18 02:34:50.983852 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-18 02:34:50.983878 | orchestrator | Wednesday 18 March 2026 02:34:26 +0000 (0:00:00.896) 0:02:23.166 ******* 2026-03-18 02:34:50.983888 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:34:50.983896 | orchestrator | 2026-03-18 02:34:50.983905 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-18 02:34:50.983913 | orchestrator | Wednesday 18 March 2026 02:34:27 +0000 (0:00:00.173) 0:02:23.340 ******* 2026-03-18 02:34:50.983921 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:34:50.983930 | orchestrator | 
skipping: [testbed-node-4] 2026-03-18 02:34:50.983938 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:34:50.983947 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:50.983955 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:50.983964 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:50.983972 | orchestrator | 2026-03-18 02:34:50.983981 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-18 02:34:50.983989 | orchestrator | Wednesday 18 March 2026 02:34:27 +0000 (0:00:00.725) 0:02:24.065 ******* 2026-03-18 02:34:50.983997 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:34:50.984006 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:34:50.984014 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:34:50.984022 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:50.984031 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:50.984039 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:50.984047 | orchestrator | 2026-03-18 02:34:50.984056 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-18 02:34:50.984064 | orchestrator | Wednesday 18 March 2026 02:34:28 +0000 (0:00:00.897) 0:02:24.963 ******* 2026-03-18 02:34:50.984073 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:34:50.984081 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:34:50.984090 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:34:50.984098 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:50.984107 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:50.984116 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:50.984124 | orchestrator | 2026-03-18 02:34:50.984133 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-18 02:34:50.984141 | orchestrator | Wednesday 18 March 2026 02:34:29 +0000 
(0:00:00.673) 0:02:25.636 ******* 2026-03-18 02:34:50.984150 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:34:50.984159 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:34:50.984169 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:34:50.984179 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:34:50.984188 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:34:50.984198 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:34:50.984207 | orchestrator | 2026-03-18 02:34:50.984217 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-18 02:34:50.984245 | orchestrator | Wednesday 18 March 2026 02:34:33 +0000 (0:00:03.638) 0:02:29.275 ******* 2026-03-18 02:34:50.984256 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:34:50.984265 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:34:50.984275 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:34:50.984284 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:34:50.984292 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:34:50.984300 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:34:50.984309 | orchestrator | 2026-03-18 02:34:50.984317 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-18 02:34:50.984326 | orchestrator | Wednesday 18 March 2026 02:34:33 +0000 (0:00:00.699) 0:02:29.974 ******* 2026-03-18 02:34:50.984335 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:34:50.984345 | orchestrator | 2026-03-18 02:34:50.984354 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-18 02:34:50.984362 | orchestrator | Wednesday 18 March 2026 02:34:35 +0000 (0:00:01.456) 0:02:31.431 ******* 2026-03-18 02:34:50.984377 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:34:50.984386 | orchestrator | 
skipping: [testbed-node-4] 2026-03-18 02:34:50.984394 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:34:50.984402 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:50.984411 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:50.984419 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:50.984428 | orchestrator | 2026-03-18 02:34:50.984450 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-18 02:34:50.984459 | orchestrator | Wednesday 18 March 2026 02:34:36 +0000 (0:00:00.898) 0:02:32.329 ******* 2026-03-18 02:34:50.984467 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:34:50.984476 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:34:50.984484 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:34:50.984493 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:50.984501 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:50.984509 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:50.984518 | orchestrator | 2026-03-18 02:34:50.984526 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-18 02:34:50.984535 | orchestrator | Wednesday 18 March 2026 02:34:36 +0000 (0:00:00.712) 0:02:33.042 ******* 2026-03-18 02:34:50.984543 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:34:50.984566 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:34:50.984576 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:34:50.984584 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:50.984593 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:50.984601 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:50.984609 | orchestrator | 2026-03-18 02:34:50.984618 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-18 02:34:50.984627 | orchestrator | Wednesday 18 March 2026 02:34:37 +0000 
(0:00:00.992) 0:02:34.035 ******* 2026-03-18 02:34:50.984635 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:34:50.984644 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:34:50.984652 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:34:50.984661 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:50.984669 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:50.984678 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:50.984686 | orchestrator | 2026-03-18 02:34:50.984695 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-18 02:34:50.984703 | orchestrator | Wednesday 18 March 2026 02:34:38 +0000 (0:00:00.639) 0:02:34.675 ******* 2026-03-18 02:34:50.984712 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:34:50.984720 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:34:50.984729 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:34:50.984737 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:50.984745 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:50.984754 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:50.984762 | orchestrator | 2026-03-18 02:34:50.984771 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-18 02:34:50.984779 | orchestrator | Wednesday 18 March 2026 02:34:39 +0000 (0:00:01.006) 0:02:35.681 ******* 2026-03-18 02:34:50.984788 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:34:50.984796 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:34:50.984805 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:34:50.984813 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:50.984822 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:50.984830 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:50.984838 | orchestrator | 2026-03-18 02:34:50.984847 | orchestrator | TASK [ceph-container-common 
: Set_fact ceph_release pacific] ******************* 2026-03-18 02:34:50.984855 | orchestrator | Wednesday 18 March 2026 02:34:40 +0000 (0:00:00.664) 0:02:36.345 ******* 2026-03-18 02:34:50.984864 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:34:50.984872 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:34:50.984887 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:34:50.984896 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:50.984905 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:50.984913 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:50.984921 | orchestrator | 2026-03-18 02:34:50.984930 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-18 02:34:50.984939 | orchestrator | Wednesday 18 March 2026 02:34:41 +0000 (0:00:00.949) 0:02:37.294 ******* 2026-03-18 02:34:50.984947 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:34:50.984955 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:34:50.984964 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:34:50.984972 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:50.984980 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:50.984989 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:50.984997 | orchestrator | 2026-03-18 02:34:50.985018 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-18 02:34:50.985036 | orchestrator | Wednesday 18 March 2026 02:34:41 +0000 (0:00:00.656) 0:02:37.951 ******* 2026-03-18 02:34:50.985045 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:34:50.985053 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:34:50.985062 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:34:50.985070 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:34:50.985079 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:34:50.985087 | orchestrator | ok: [testbed-node-2] 2026-03-18 
02:34:50.985095 | orchestrator | 2026-03-18 02:34:50.985104 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-18 02:34:50.985113 | orchestrator | Wednesday 18 March 2026 02:34:43 +0000 (0:00:01.405) 0:02:39.357 ******* 2026-03-18 02:34:50.985122 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:34:50.985132 | orchestrator | 2026-03-18 02:34:50.985141 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-18 02:34:50.985149 | orchestrator | Wednesday 18 March 2026 02:34:44 +0000 (0:00:01.323) 0:02:40.680 ******* 2026-03-18 02:34:50.985158 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-18 02:34:50.985167 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-18 02:34:50.985176 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-18 02:34:50.985185 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-18 02:34:50.985193 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-18 02:34:50.985202 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-18 02:34:50.985210 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-18 02:34:50.985219 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-18 02:34:50.985289 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-18 02:34:50.985300 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-18 02:34:50.985308 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-18 02:34:50.985317 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-18 02:34:50.985325 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 
2026-03-18 02:34:50.985334 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-18 02:34:50.985343 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-18 02:34:50.985351 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-18 02:34:50.985360 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-18 02:34:50.985374 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-18 02:34:56.682824 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-18 02:34:56.682910 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-18 02:34:56.682920 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-18 02:34:56.682951 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-18 02:34:56.682963 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-18 02:34:56.682974 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-18 02:34:56.682985 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-18 02:34:56.682995 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-18 02:34:56.683007 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-18 02:34:56.683018 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-18 02:34:56.683029 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-18 02:34:56.683038 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-18 02:34:56.683046 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-18 02:34:56.683052 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-18 02:34:56.683058 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-18 02:34:56.683065 | orchestrator | 
changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-18 02:34:56.683071 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-18 02:34:56.683077 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-18 02:34:56.683083 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-18 02:34:56.683090 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-18 02:34:56.683096 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-18 02:34:56.683102 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-18 02:34:56.683108 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-18 02:34:56.683114 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-18 02:34:56.683120 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 02:34:56.683127 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-18 02:34:56.683133 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-18 02:34:56.683139 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-18 02:34:56.683145 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-18 02:34:56.683151 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 02:34:56.683157 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 02:34:56.683163 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-18 02:34:56.683169 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 02:34:56.683175 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 02:34:56.683181 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 
2026-03-18 02:34:56.683187 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 02:34:56.683193 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 02:34:56.683199 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 02:34:56.683208 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 02:34:56.683217 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 02:34:56.683269 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 02:34:56.683277 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 02:34:56.683284 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 02:34:56.683290 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 02:34:56.683303 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 02:34:56.683309 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 02:34:56.683315 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 02:34:56.683322 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 02:34:56.683328 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 02:34:56.683334 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 02:34:56.683353 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 02:34:56.683359 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 02:34:56.683365 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 02:34:56.683372 | orchestrator | changed: 
[testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 02:34:56.683378 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 02:34:56.683386 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 02:34:56.683393 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 02:34:56.683400 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 02:34:56.683420 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 02:34:56.683428 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-03-18 02:34:56.683436 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 02:34:56.683444 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 02:34:56.683452 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 02:34:56.683459 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 02:34:56.683466 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 02:34:56.683474 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-03-18 02:34:56.683482 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-03-18 02:34:56.683489 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-03-18 02:34:56.683497 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 02:34:56.683504 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-03-18 02:34:56.683512 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 02:34:56.683519 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-03-18 02:34:56.683525 | orchestrator 
| changed: [testbed-node-5] => (item=/var/log/ceph) 2026-03-18 02:34:56.683532 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-03-18 02:34:56.683538 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-03-18 02:34:56.683544 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-03-18 02:34:56.683551 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-03-18 02:34:56.683557 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-03-18 02:34:56.683564 | orchestrator | 2026-03-18 02:34:56.683571 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-18 02:34:56.683578 | orchestrator | Wednesday 18 March 2026 02:34:50 +0000 (0:00:06.530) 0:02:47.210 ******* 2026-03-18 02:34:56.683584 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:56.683591 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:56.683598 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:56.683605 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 02:34:56.683613 | orchestrator | 2026-03-18 02:34:56.683619 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-18 02:34:56.683631 | orchestrator | Wednesday 18 March 2026 02:34:52 +0000 (0:00:01.125) 0:02:48.336 ******* 2026-03-18 02:34:56.683637 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-18 02:34:56.683644 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-18 02:34:56.683651 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 
2026-03-18 02:34:56.683657 | orchestrator | 2026-03-18 02:34:56.683664 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-18 02:34:56.683671 | orchestrator | Wednesday 18 March 2026 02:34:52 +0000 (0:00:00.720) 0:02:49.056 ******* 2026-03-18 02:34:56.683677 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-18 02:34:56.683683 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-18 02:34:56.683690 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-18 02:34:56.683696 | orchestrator | 2026-03-18 02:34:56.683703 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-18 02:34:56.683709 | orchestrator | Wednesday 18 March 2026 02:34:54 +0000 (0:00:01.279) 0:02:50.335 ******* 2026-03-18 02:34:56.683716 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:34:56.683722 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:34:56.683729 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:34:56.683735 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:56.683742 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:56.683748 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:56.683754 | orchestrator | 2026-03-18 02:34:56.683761 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-18 02:34:56.683767 | orchestrator | Wednesday 18 March 2026 02:34:54 +0000 (0:00:00.921) 0:02:51.256 ******* 2026-03-18 02:34:56.683774 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:34:56.683784 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:34:56.683791 | orchestrator | ok: [testbed-node-5] 2026-03-18 
02:34:56.683797 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:56.683804 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:56.683810 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:56.683816 | orchestrator | 2026-03-18 02:34:56.683823 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-18 02:34:56.683829 | orchestrator | Wednesday 18 March 2026 02:34:55 +0000 (0:00:00.739) 0:02:51.996 ******* 2026-03-18 02:34:56.683836 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:34:56.683842 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:34:56.683849 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:34:56.683855 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:34:56.683862 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:34:56.683868 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:34:56.683875 | orchestrator | 2026-03-18 02:34:56.683885 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-18 02:35:10.904085 | orchestrator | Wednesday 18 March 2026 02:34:56 +0000 (0:00:00.928) 0:02:52.924 ******* 2026-03-18 02:35:10.904315 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:10.904337 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:10.904350 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:10.904361 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:10.904372 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:10.904383 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:10.904395 | orchestrator | 2026-03-18 02:35:10.904407 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-18 02:35:10.904447 | orchestrator | Wednesday 18 March 2026 02:34:57 +0000 (0:00:00.667) 0:02:53.592 ******* 2026-03-18 02:35:10.904458 | orchestrator | skipping: [testbed-node-3] 2026-03-18 
02:35:10.904470 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:10.904480 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:10.904491 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:10.904502 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:10.904526 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:10.904538 | orchestrator | 2026-03-18 02:35:10.904560 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-18 02:35:10.904574 | orchestrator | Wednesday 18 March 2026 02:34:58 +0000 (0:00:00.895) 0:02:54.487 ******* 2026-03-18 02:35:10.904587 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:10.904599 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:10.904611 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:10.904624 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:10.904635 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:10.904648 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:10.904660 | orchestrator | 2026-03-18 02:35:10.904673 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-18 02:35:10.904686 | orchestrator | Wednesday 18 March 2026 02:34:58 +0000 (0:00:00.666) 0:02:55.153 ******* 2026-03-18 02:35:10.904699 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:10.904711 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:10.904724 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:10.904734 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:10.904745 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:10.904755 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:10.904766 | orchestrator | 2026-03-18 02:35:10.904777 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' 
(new report)] *** 2026-03-18 02:35:10.904787 | orchestrator | Wednesday 18 March 2026 02:34:59 +0000 (0:00:00.941) 0:02:56.094 ******* 2026-03-18 02:35:10.904798 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:10.904809 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:10.904819 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:10.904830 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:10.904840 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:10.904851 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:10.904861 | orchestrator | 2026-03-18 02:35:10.904872 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-18 02:35:10.904883 | orchestrator | Wednesday 18 March 2026 02:35:00 +0000 (0:00:00.643) 0:02:56.737 ******* 2026-03-18 02:35:10.904894 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:10.904905 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:10.904916 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:10.904927 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:35:10.904940 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:35:10.904950 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:35:10.904961 | orchestrator | 2026-03-18 02:35:10.904972 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-18 02:35:10.904983 | orchestrator | Wednesday 18 March 2026 02:35:03 +0000 (0:00:02.972) 0:02:59.710 ******* 2026-03-18 02:35:10.904994 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:35:10.905004 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:35:10.905015 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:35:10.905025 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:10.905036 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:10.905046 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:10.905057 | 
orchestrator | 2026-03-18 02:35:10.905068 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-18 02:35:10.905079 | orchestrator | Wednesday 18 March 2026 02:35:04 +0000 (0:00:00.674) 0:03:00.385 ******* 2026-03-18 02:35:10.905098 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:35:10.905109 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:35:10.905119 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:35:10.905130 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:10.905141 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:10.905151 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:10.905162 | orchestrator | 2026-03-18 02:35:10.905173 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-18 02:35:10.905184 | orchestrator | Wednesday 18 March 2026 02:35:05 +0000 (0:00:01.011) 0:03:01.396 ******* 2026-03-18 02:35:10.905194 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:10.905205 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:10.905216 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:10.905245 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:10.905256 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:10.905287 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:10.905298 | orchestrator | 2026-03-18 02:35:10.905310 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-18 02:35:10.905321 | orchestrator | Wednesday 18 March 2026 02:35:05 +0000 (0:00:00.680) 0:03:02.076 ******* 2026-03-18 02:35:10.905332 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-18 02:35:10.905344 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 
2026-03-18 02:35:10.905355 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-18 02:35:10.905366 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:10.905402 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:10.905413 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:10.905424 | orchestrator | 2026-03-18 02:35:10.905435 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-18 02:35:10.905446 | orchestrator | Wednesday 18 March 2026 02:35:06 +0000 (0:00:00.922) 0:03:02.998 ******* 2026-03-18 02:35:10.905460 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-03-18 02:35:10.905475 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-03-18 02:35:10.905488 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:10.905500 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-03-18 02:35:10.905511 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-03-18 02:35:10.905522 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:10.905533 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-03-18 02:35:10.905552 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-03-18 02:35:10.905563 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:10.905574 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:10.905585 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:10.905595 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:10.905606 | orchestrator | 2026-03-18 02:35:10.905617 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-18 02:35:10.905642 | orchestrator | Wednesday 18 March 2026 02:35:07 +0000 (0:00:00.716) 0:03:03.715 ******* 2026-03-18 02:35:10.905663 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:10.905674 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:10.905685 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:10.905695 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:10.905706 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:10.905716 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:10.905727 | orchestrator | 
2026-03-18 02:35:10.905738 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-18 02:35:10.905749 | orchestrator | Wednesday 18 March 2026 02:35:08 +0000 (0:00:00.948) 0:03:04.664 ******* 2026-03-18 02:35:10.905760 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:10.905770 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:10.905781 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:10.905791 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:10.905802 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:10.905813 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:10.905823 | orchestrator | 2026-03-18 02:35:10.905834 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 02:35:10.905845 | orchestrator | Wednesday 18 March 2026 02:35:09 +0000 (0:00:00.598) 0:03:05.263 ******* 2026-03-18 02:35:10.905856 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:10.905867 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:10.905883 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:10.905894 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:10.905905 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:10.905916 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:10.905926 | orchestrator | 2026-03-18 02:35:10.905937 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-18 02:35:10.905948 | orchestrator | Wednesday 18 March 2026 02:35:09 +0000 (0:00:00.963) 0:03:06.227 ******* 2026-03-18 02:35:10.905959 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:10.905970 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:10.905981 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:10.905991 | orchestrator | skipping: 
[testbed-node-0] 2026-03-18 02:35:10.906002 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:10.906012 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:10.906096 | orchestrator | 2026-03-18 02:35:10.906107 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 02:35:10.906127 | orchestrator | Wednesday 18 March 2026 02:35:10 +0000 (0:00:00.916) 0:03:07.143 ******* 2026-03-18 02:35:29.277673 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:29.277797 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:29.277814 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:29.277825 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:29.277837 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:29.277848 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:29.277866 | orchestrator | 2026-03-18 02:35:29.277889 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 02:35:29.277939 | orchestrator | Wednesday 18 March 2026 02:35:11 +0000 (0:00:00.686) 0:03:07.829 ******* 2026-03-18 02:35:29.277961 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:35:29.277984 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:35:29.277998 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:35:29.278008 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:29.278082 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:29.278094 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:29.278105 | orchestrator | 2026-03-18 02:35:29.278116 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 02:35:29.278127 | orchestrator | Wednesday 18 March 2026 02:35:12 +0000 (0:00:00.967) 0:03:08.797 ******* 2026-03-18 02:35:29.278138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 02:35:29.278150 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 02:35:29.278161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 02:35:29.278171 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:29.278185 | orchestrator | 2026-03-18 02:35:29.278198 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 02:35:29.278210 | orchestrator | Wednesday 18 March 2026 02:35:12 +0000 (0:00:00.420) 0:03:09.218 ******* 2026-03-18 02:35:29.278250 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 02:35:29.278262 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 02:35:29.278274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 02:35:29.278287 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:29.278300 | orchestrator | 2026-03-18 02:35:29.278312 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 02:35:29.278324 | orchestrator | Wednesday 18 March 2026 02:35:13 +0000 (0:00:00.444) 0:03:09.662 ******* 2026-03-18 02:35:29.278336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 02:35:29.278349 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 02:35:29.278362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 02:35:29.278374 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:29.278386 | orchestrator | 2026-03-18 02:35:29.278399 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 02:35:29.278411 | orchestrator | Wednesday 18 March 2026 02:35:13 +0000 (0:00:00.451) 0:03:10.113 ******* 2026-03-18 02:35:29.278424 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:35:29.278436 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:35:29.278448 | orchestrator | ok: [testbed-node-5] 
2026-03-18 02:35:29.278460 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:29.278473 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:29.278485 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:29.278497 | orchestrator | 2026-03-18 02:35:29.278508 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 02:35:29.278519 | orchestrator | Wednesday 18 March 2026 02:35:14 +0000 (0:00:00.662) 0:03:10.776 ******* 2026-03-18 02:35:29.278530 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-18 02:35:29.278541 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-18 02:35:29.278552 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-18 02:35:29.278563 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-18 02:35:29.278574 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:29.278584 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-18 02:35:29.278595 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:29.278606 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-18 02:35:29.278617 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:29.278627 | orchestrator | 2026-03-18 02:35:29.278638 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-18 02:35:29.278649 | orchestrator | Wednesday 18 March 2026 02:35:16 +0000 (0:00:01.871) 0:03:12.647 ******* 2026-03-18 02:35:29.278660 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:35:29.278678 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:35:29.278689 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:35:29.278700 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:35:29.278710 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:35:29.278721 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:35:29.278731 | orchestrator | 2026-03-18 02:35:29.278743 | orchestrator | RUNNING 
HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-18 02:35:29.278754 | orchestrator | Wednesday 18 March 2026 02:35:19 +0000 (0:00:02.758) 0:03:15.406 ******* 2026-03-18 02:35:29.278764 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:35:29.278775 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:35:29.278786 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:35:29.278796 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:35:29.278823 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:35:29.278834 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:35:29.278845 | orchestrator | 2026-03-18 02:35:29.278856 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-18 02:35:29.278867 | orchestrator | Wednesday 18 March 2026 02:35:20 +0000 (0:00:01.049) 0:03:16.455 ******* 2026-03-18 02:35:29.278877 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:29.278888 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:29.278899 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:29.278910 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:35:29.278922 | orchestrator | 2026-03-18 02:35:29.278933 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-18 02:35:29.278944 | orchestrator | Wednesday 18 March 2026 02:35:21 +0000 (0:00:01.205) 0:03:17.661 ******* 2026-03-18 02:35:29.278954 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:35:29.278985 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:35:29.278998 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:35:29.279009 | orchestrator | 2026-03-18 02:35:29.279020 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-18 02:35:29.279031 | orchestrator | Wednesday 18 March 2026 02:35:21 +0000 
(0:00:00.363) 0:03:18.025 ******* 2026-03-18 02:35:29.279042 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:35:29.279052 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:35:29.279063 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:35:29.279074 | orchestrator | 2026-03-18 02:35:29.279085 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-18 02:35:29.279096 | orchestrator | Wednesday 18 March 2026 02:35:23 +0000 (0:00:01.527) 0:03:19.553 ******* 2026-03-18 02:35:29.279107 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-18 02:35:29.279118 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-18 02:35:29.279129 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-18 02:35:29.279139 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:29.279150 | orchestrator | 2026-03-18 02:35:29.279161 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-18 02:35:29.279172 | orchestrator | Wednesday 18 March 2026 02:35:23 +0000 (0:00:00.677) 0:03:20.230 ******* 2026-03-18 02:35:29.279183 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:35:29.279194 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:35:29.279205 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:35:29.279256 | orchestrator | 2026-03-18 02:35:29.279268 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-18 02:35:29.279279 | orchestrator | Wednesday 18 March 2026 02:35:24 +0000 (0:00:00.368) 0:03:20.599 ******* 2026-03-18 02:35:29.279290 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:29.279301 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:29.279312 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:29.279323 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, 
testbed-node-4, testbed-node-5 2026-03-18 02:35:29.279342 | orchestrator | 2026-03-18 02:35:29.279353 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-18 02:35:29.279364 | orchestrator | Wednesday 18 March 2026 02:35:25 +0000 (0:00:01.149) 0:03:21.748 ******* 2026-03-18 02:35:29.279375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 02:35:29.279385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 02:35:29.279396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 02:35:29.279407 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:29.279418 | orchestrator | 2026-03-18 02:35:29.279428 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-18 02:35:29.279439 | orchestrator | Wednesday 18 March 2026 02:35:25 +0000 (0:00:00.458) 0:03:22.207 ******* 2026-03-18 02:35:29.279450 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:29.279461 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:29.279472 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:29.279483 | orchestrator | 2026-03-18 02:35:29.279494 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-18 02:35:29.279504 | orchestrator | Wednesday 18 March 2026 02:35:26 +0000 (0:00:00.345) 0:03:22.552 ******* 2026-03-18 02:35:29.279515 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:29.279526 | orchestrator | 2026-03-18 02:35:29.279537 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-18 02:35:29.279547 | orchestrator | Wednesday 18 March 2026 02:35:26 +0000 (0:00:00.235) 0:03:22.788 ******* 2026-03-18 02:35:29.279558 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:29.279569 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:29.279579 | 
orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:29.279590 | orchestrator | 2026-03-18 02:35:29.279601 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-18 02:35:29.279612 | orchestrator | Wednesday 18 March 2026 02:35:26 +0000 (0:00:00.325) 0:03:23.113 ******* 2026-03-18 02:35:29.279622 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:29.279633 | orchestrator | 2026-03-18 02:35:29.279644 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-18 02:35:29.279655 | orchestrator | Wednesday 18 March 2026 02:35:27 +0000 (0:00:00.819) 0:03:23.933 ******* 2026-03-18 02:35:29.279665 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:29.279676 | orchestrator | 2026-03-18 02:35:29.279687 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-18 02:35:29.279698 | orchestrator | Wednesday 18 March 2026 02:35:27 +0000 (0:00:00.245) 0:03:24.179 ******* 2026-03-18 02:35:29.279708 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:29.279719 | orchestrator | 2026-03-18 02:35:29.279729 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-18 02:35:29.279740 | orchestrator | Wednesday 18 March 2026 02:35:28 +0000 (0:00:00.158) 0:03:24.337 ******* 2026-03-18 02:35:29.279751 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:29.279762 | orchestrator | 2026-03-18 02:35:29.279772 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-18 02:35:29.279788 | orchestrator | Wednesday 18 March 2026 02:35:28 +0000 (0:00:00.262) 0:03:24.600 ******* 2026-03-18 02:35:29.279799 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:29.279810 | orchestrator | 2026-03-18 02:35:29.279821 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 
2026-03-18 02:35:29.279832 | orchestrator | Wednesday 18 March 2026 02:35:28 +0000 (0:00:00.259) 0:03:24.859 ******* 2026-03-18 02:35:29.279843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 02:35:29.279854 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 02:35:29.279865 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 02:35:29.279876 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:29.279887 | orchestrator | 2026-03-18 02:35:29.279898 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-18 02:35:29.279915 | orchestrator | Wednesday 18 March 2026 02:35:29 +0000 (0:00:00.448) 0:03:25.307 ******* 2026-03-18 02:35:29.279934 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:48.927042 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:48.927165 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:48.927181 | orchestrator | 2026-03-18 02:35:48.927193 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-18 02:35:48.927205 | orchestrator | Wednesday 18 March 2026 02:35:29 +0000 (0:00:00.339) 0:03:25.647 ******* 2026-03-18 02:35:48.927249 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:48.927259 | orchestrator | 2026-03-18 02:35:48.927269 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-18 02:35:48.927279 | orchestrator | Wednesday 18 March 2026 02:35:29 +0000 (0:00:00.237) 0:03:25.885 ******* 2026-03-18 02:35:48.927289 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:48.927298 | orchestrator | 2026-03-18 02:35:48.927308 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-18 02:35:48.927318 | orchestrator | Wednesday 18 March 2026 02:35:29 +0000 (0:00:00.248) 0:03:26.133 ******* 2026-03-18 
02:35:48.927328 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:48.927338 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:48.927347 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:48.927358 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 02:35:48.927368 | orchestrator | 2026-03-18 02:35:48.927378 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-18 02:35:48.927389 | orchestrator | Wednesday 18 March 2026 02:35:31 +0000 (0:00:01.190) 0:03:27.324 ******* 2026-03-18 02:35:48.927406 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:35:48.927423 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:35:48.927440 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:35:48.927456 | orchestrator | 2026-03-18 02:35:48.927470 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-18 02:35:48.927486 | orchestrator | Wednesday 18 March 2026 02:35:31 +0000 (0:00:00.370) 0:03:27.694 ******* 2026-03-18 02:35:48.927502 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:35:48.927517 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:35:48.927534 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:35:48.927550 | orchestrator | 2026-03-18 02:35:48.927565 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-18 02:35:48.927581 | orchestrator | Wednesday 18 March 2026 02:35:32 +0000 (0:00:01.520) 0:03:29.215 ******* 2026-03-18 02:35:48.927597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 02:35:48.927614 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 02:35:48.927630 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 02:35:48.927646 | orchestrator | skipping: [testbed-node-3] 2026-03-18 
02:35:48.927663 | orchestrator | 2026-03-18 02:35:48.927680 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-18 02:35:48.927697 | orchestrator | Wednesday 18 March 2026 02:35:33 +0000 (0:00:00.699) 0:03:29.915 ******* 2026-03-18 02:35:48.927713 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:35:48.927730 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:35:48.927747 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:35:48.927763 | orchestrator | 2026-03-18 02:35:48.927780 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-18 02:35:48.927796 | orchestrator | Wednesday 18 March 2026 02:35:34 +0000 (0:00:00.392) 0:03:30.308 ******* 2026-03-18 02:35:48.927811 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:48.927827 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:48.927842 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:48.927858 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 02:35:48.927907 | orchestrator | 2026-03-18 02:35:48.927925 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-18 02:35:48.927941 | orchestrator | Wednesday 18 March 2026 02:35:35 +0000 (0:00:01.177) 0:03:31.486 ******* 2026-03-18 02:35:48.927958 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:35:48.927975 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:35:48.927991 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:35:48.928006 | orchestrator | 2026-03-18 02:35:48.928023 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-18 02:35:48.928039 | orchestrator | Wednesday 18 March 2026 02:35:35 +0000 (0:00:00.403) 0:03:31.889 ******* 2026-03-18 02:35:48.928056 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:35:48.928073 | 
orchestrator | changed: [testbed-node-4] 2026-03-18 02:35:48.928090 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:35:48.928106 | orchestrator | 2026-03-18 02:35:48.928122 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-18 02:35:48.928138 | orchestrator | Wednesday 18 March 2026 02:35:36 +0000 (0:00:01.221) 0:03:33.111 ******* 2026-03-18 02:35:48.928155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 02:35:48.928171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 02:35:48.928186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 02:35:48.928201 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:48.928270 | orchestrator | 2026-03-18 02:35:48.928306 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-18 02:35:48.928322 | orchestrator | Wednesday 18 March 2026 02:35:37 +0000 (0:00:00.952) 0:03:34.063 ******* 2026-03-18 02:35:48.928337 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:35:48.928354 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:35:48.928369 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:35:48.928385 | orchestrator | 2026-03-18 02:35:48.928401 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-18 02:35:48.928417 | orchestrator | Wednesday 18 March 2026 02:35:38 +0000 (0:00:00.608) 0:03:34.672 ******* 2026-03-18 02:35:48.928434 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:35:48.928451 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:35:48.928468 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:35:48.928485 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:35:48.928502 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:35:48.928519 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:35:48.928535 | orchestrator | 
2026-03-18 02:35:48.928578 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-18 02:35:48.928598 | orchestrator | Wednesday 18 March 2026 02:35:39 +0000 (0:00:00.653) 0:03:35.326 *******
2026-03-18 02:35:48.928616 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:35:48.928633 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:35:48.928650 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:35:48.928667 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:35:48.928686 | orchestrator |
2026-03-18 02:35:48.928703 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-18 02:35:48.928720 | orchestrator | Wednesday 18 March 2026 02:35:40 +0000 (0:00:01.189) 0:03:36.516 *******
2026-03-18 02:35:48.928736 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:35:48.928751 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:35:48.928767 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:35:48.928783 | orchestrator |
2026-03-18 02:35:48.928799 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-18 02:35:48.928815 | orchestrator | Wednesday 18 March 2026 02:35:40 +0000 (0:00:00.390) 0:03:36.906 *******
2026-03-18 02:35:48.928830 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:35:48.928846 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:35:48.928863 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:35:48.928898 | orchestrator |
2026-03-18 02:35:48.928913 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-18 02:35:48.928929 | orchestrator | Wednesday 18 March 2026 02:35:41 +0000 (0:00:01.197) 0:03:38.103 *******
2026-03-18 02:35:48.928945 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-18 02:35:48.928961 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-18 02:35:48.928977 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-18 02:35:48.928993 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:35:48.929009 | orchestrator |
2026-03-18 02:35:48.929024 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-18 02:35:48.929040 | orchestrator | Wednesday 18 March 2026 02:35:42 +0000 (0:00:01.146) 0:03:39.250 *******
2026-03-18 02:35:48.929057 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:35:48.929072 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:35:48.929087 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:35:48.929104 | orchestrator |
2026-03-18 02:35:48.929121 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-03-18 02:35:48.929137 | orchestrator |
2026-03-18 02:35:48.929153 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-18 02:35:48.929169 | orchestrator | Wednesday 18 March 2026 02:35:43 +0000 (0:00:00.639) 0:03:39.890 *******
2026-03-18 02:35:48.929187 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:35:48.929204 | orchestrator |
2026-03-18 02:35:48.929251 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-18 02:35:48.929268 | orchestrator | Wednesday 18 March 2026 02:35:44 +0000 (0:00:00.830) 0:03:40.720 *******
2026-03-18 02:35:48.929286 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:35:48.929302 | orchestrator |
2026-03-18 02:35:48.929317 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-18 02:35:48.929334 | orchestrator | Wednesday 18 March 2026 02:35:45 +0000 (0:00:00.600) 0:03:41.320 *******
2026-03-18 02:35:48.929351 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:35:48.929367 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:35:48.929382 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:35:48.929399 | orchestrator |
2026-03-18 02:35:48.929416 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-18 02:35:48.929432 | orchestrator | Wednesday 18 March 2026 02:35:45 +0000 (0:00:00.797) 0:03:42.118 *******
2026-03-18 02:35:48.929448 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:35:48.929465 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:35:48.929482 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:35:48.929498 | orchestrator |
2026-03-18 02:35:48.929514 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-18 02:35:48.929531 | orchestrator | Wednesday 18 March 2026 02:35:46 +0000 (0:00:00.640) 0:03:42.758 *******
2026-03-18 02:35:48.929547 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:35:48.929563 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:35:48.929579 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:35:48.929595 | orchestrator |
2026-03-18 02:35:48.929611 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-18 02:35:48.929627 | orchestrator | Wednesday 18 March 2026 02:35:46 +0000 (0:00:00.340) 0:03:43.099 *******
2026-03-18 02:35:48.929643 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:35:48.929660 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:35:48.929676 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:35:48.929691 | orchestrator |
2026-03-18 02:35:48.929708 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-18 02:35:48.929737 | orchestrator | Wednesday 18 March 2026 02:35:47 +0000 (0:00:00.348) 0:03:43.447 *******
2026-03-18 02:35:48.929753 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:35:48.929782 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:35:48.929797 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:35:48.929813 | orchestrator |
2026-03-18 02:35:48.929829 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-18 02:35:48.929844 | orchestrator | Wednesday 18 March 2026 02:35:47 +0000 (0:00:00.736) 0:03:44.184 *******
2026-03-18 02:35:48.929859 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:35:48.929877 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:35:48.929893 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:35:48.929910 | orchestrator |
2026-03-18 02:35:48.929926 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-18 02:35:48.929942 | orchestrator | Wednesday 18 March 2026 02:35:48 +0000 (0:00:00.604) 0:03:44.788 *******
2026-03-18 02:35:48.929958 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:35:48.929973 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:35:48.930004 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:36:11.709829 | orchestrator |
2026-03-18 02:36:11.709971 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-18 02:36:11.709986 | orchestrator | Wednesday 18 March 2026 02:35:48 +0000 (0:00:00.380) 0:03:45.169 *******
2026-03-18 02:36:11.709995 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:11.710005 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:36:11.710065 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:36:11.710075 | orchestrator |
2026-03-18 02:36:11.710084 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-18 02:36:11.710092 | orchestrator | Wednesday 18 March 2026 02:35:49 +0000 (0:00:00.804) 0:03:45.973 *******
2026-03-18 02:36:11.710101 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:11.710109 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:36:11.710117 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:36:11.710125 | orchestrator |
2026-03-18 02:36:11.710133 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-18 02:36:11.710142 | orchestrator | Wednesday 18 March 2026 02:35:50 +0000 (0:00:00.818) 0:03:46.792 *******
2026-03-18 02:36:11.710150 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:36:11.710160 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:36:11.710168 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:36:11.710176 | orchestrator |
2026-03-18 02:36:11.710184 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-18 02:36:11.710192 | orchestrator | Wednesday 18 March 2026 02:35:51 +0000 (0:00:00.627) 0:03:47.420 *******
2026-03-18 02:36:11.710200 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:11.710293 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:36:11.710305 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:36:11.710313 | orchestrator |
2026-03-18 02:36:11.710323 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-18 02:36:11.710333 | orchestrator | Wednesday 18 March 2026 02:35:51 +0000 (0:00:00.404) 0:03:47.824 *******
2026-03-18 02:36:11.710341 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:36:11.710351 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:36:11.710359 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:36:11.710368 | orchestrator |
2026-03-18 02:36:11.710377 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-18 02:36:11.710386 | orchestrator | Wednesday 18 March 2026 02:35:51 +0000 (0:00:00.362) 0:03:48.187 *******
2026-03-18 02:36:11.710395 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:36:11.710404 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:36:11.710413 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:36:11.710422 | orchestrator |
2026-03-18 02:36:11.710431 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-18 02:36:11.710439 | orchestrator | Wednesday 18 March 2026 02:35:52 +0000 (0:00:00.367) 0:03:48.554 *******
2026-03-18 02:36:11.710448 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:36:11.710457 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:36:11.710466 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:36:11.710501 | orchestrator |
2026-03-18 02:36:11.710510 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-18 02:36:11.710519 | orchestrator | Wednesday 18 March 2026 02:35:52 +0000 (0:00:00.628) 0:03:49.182 *******
2026-03-18 02:36:11.710528 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:36:11.710537 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:36:11.710546 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:36:11.710554 | orchestrator |
2026-03-18 02:36:11.710564 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-18 02:36:11.710573 | orchestrator | Wednesday 18 March 2026 02:35:53 +0000 (0:00:00.455) 0:03:49.638 *******
2026-03-18 02:36:11.710581 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:36:11.710591 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:36:11.710600 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:36:11.710609 | orchestrator |
2026-03-18 02:36:11.710618 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-18 02:36:11.710626 | orchestrator | Wednesday 18 March 2026 02:35:53 +0000 (0:00:00.321) 0:03:49.960 *******
2026-03-18 02:36:11.710634 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:11.710642 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:36:11.710649 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:36:11.710657 | orchestrator |
2026-03-18 02:36:11.710665 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-18 02:36:11.710673 | orchestrator | Wednesday 18 March 2026 02:35:54 +0000 (0:00:00.416) 0:03:50.376 *******
2026-03-18 02:36:11.710681 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:11.710688 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:36:11.710696 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:36:11.710704 | orchestrator |
2026-03-18 02:36:11.710712 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-18 02:36:11.710719 | orchestrator | Wednesday 18 March 2026 02:35:54 +0000 (0:00:00.666) 0:03:51.042 *******
2026-03-18 02:36:11.710727 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:11.710735 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:36:11.710743 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:36:11.710750 | orchestrator |
2026-03-18 02:36:11.710758 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-18 02:36:11.710766 | orchestrator | Wednesday 18 March 2026 02:35:55 +0000 (0:00:00.611) 0:03:51.653 *******
2026-03-18 02:36:11.710790 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:11.710798 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:36:11.710806 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:36:11.710814 | orchestrator |
2026-03-18 02:36:11.710822 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-18 02:36:11.710830 | orchestrator | Wednesday 18 March 2026 02:35:55 +0000 (0:00:00.347) 0:03:52.001 *******
2026-03-18 02:36:11.710839 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:36:11.710847 | orchestrator |
2026-03-18 02:36:11.710855 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-18 02:36:11.710863 | orchestrator | Wednesday 18 March 2026 02:35:56 +0000 (0:00:00.885) 0:03:52.886 *******
2026-03-18 02:36:11.710871 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:36:11.710879 | orchestrator |
2026-03-18 02:36:11.710886 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-18 02:36:11.710915 | orchestrator | Wednesday 18 March 2026 02:35:56 +0000 (0:00:00.186) 0:03:53.073 *******
2026-03-18 02:36:11.710924 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-18 02:36:11.710932 | orchestrator |
2026-03-18 02:36:11.710940 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-18 02:36:11.710947 | orchestrator | Wednesday 18 March 2026 02:35:57 +0000 (0:00:01.061) 0:03:54.134 *******
2026-03-18 02:36:11.710955 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:11.710965 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:36:11.710978 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:36:11.711003 | orchestrator |
2026-03-18 02:36:11.711020 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-18 02:36:11.711033 | orchestrator | Wednesday 18 March 2026 02:35:58 +0000 (0:00:00.379) 0:03:54.514 *******
2026-03-18 02:36:11.711047 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:11.711060 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:36:11.711074 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:36:11.711087 | orchestrator |
2026-03-18 02:36:11.711100 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-18 02:36:11.711110 | orchestrator | Wednesday 18 March 2026 02:35:58 +0000 (0:00:00.661) 0:03:55.176 *******
2026-03-18 02:36:11.711118 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:36:11.711126 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:36:11.711134 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:36:11.711142 | orchestrator |
2026-03-18 02:36:11.711150 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-18 02:36:11.711158 | orchestrator | Wednesday 18 March 2026 02:36:00 +0000 (0:00:01.276) 0:03:56.452 *******
2026-03-18 02:36:11.711165 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:36:11.711173 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:36:11.711181 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:36:11.711189 | orchestrator |
2026-03-18 02:36:11.711197 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-18 02:36:11.711204 | orchestrator | Wednesday 18 March 2026 02:36:00 +0000 (0:00:00.791) 0:03:57.244 *******
2026-03-18 02:36:11.711240 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:36:11.711248 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:36:11.711256 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:36:11.711263 | orchestrator |
2026-03-18 02:36:11.711271 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-18 02:36:11.711279 | orchestrator | Wednesday 18 March 2026 02:36:01 +0000 (0:00:00.715) 0:03:57.960 *******
2026-03-18 02:36:11.711287 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:11.711295 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:36:11.711302 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:36:11.711310 | orchestrator |
2026-03-18 02:36:11.711318 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-18 02:36:11.711326 | orchestrator | Wednesday 18 March 2026 02:36:02 +0000 (0:00:01.019) 0:03:58.980 *******
2026-03-18 02:36:11.711333 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:36:11.711341 | orchestrator |
2026-03-18 02:36:11.711349 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-18 02:36:11.711357 | orchestrator | Wednesday 18 March 2026 02:36:04 +0000 (0:00:01.290) 0:04:00.271 *******
2026-03-18 02:36:11.711365 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:11.711372 | orchestrator |
2026-03-18 02:36:11.711380 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-18 02:36:11.711388 | orchestrator | Wednesday 18 March 2026 02:36:04 +0000 (0:00:00.735) 0:04:01.007 *******
2026-03-18 02:36:11.711396 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-18 02:36:11.711404 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 02:36:11.711412 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 02:36:11.711420 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-18 02:36:11.711427 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-03-18 02:36:11.711436 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-18 02:36:11.711443 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-18 02:36:11.711451 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-03-18 02:36:11.711459 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-18 02:36:11.711467 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-03-18 02:36:11.711482 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-18 02:36:11.711490 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-03-18 02:36:11.711498 | orchestrator |
2026-03-18 02:36:11.711505 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-18 02:36:11.711513 | orchestrator | Wednesday 18 March 2026 02:36:07 +0000 (0:00:03.136) 0:04:04.143 *******
2026-03-18 02:36:11.711521 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:36:11.711528 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:36:11.711536 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:36:11.711544 | orchestrator |
2026-03-18 02:36:11.711552 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-18 02:36:11.711566 | orchestrator | Wednesday 18 March 2026 02:36:09 +0000 (0:00:01.243) 0:04:05.387 *******
2026-03-18 02:36:11.711574 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:11.711582 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:36:11.711590 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:36:11.711598 | orchestrator |
2026-03-18 02:36:11.711605 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-18 02:36:11.711613 | orchestrator | Wednesday 18 March 2026 02:36:09 +0000 (0:00:00.627) 0:04:06.015 *******
2026-03-18 02:36:11.711621 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:11.711629 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:36:11.711636 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:36:11.711644 | orchestrator |
2026-03-18 02:36:11.711652 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-18 02:36:11.711660 | orchestrator | Wednesday 18 March 2026 02:36:10 +0000 (0:00:00.374) 0:04:06.389 *******
2026-03-18 02:36:11.711667 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:36:11.711675 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:36:11.711683 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:36:11.711691 | orchestrator |
2026-03-18 02:36:11.711706 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-18 02:36:52.926182 | orchestrator | Wednesday 18 March 2026 02:36:11 +0000 (0:00:01.562) 0:04:07.952 *******
2026-03-18 02:36:52.926393 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:36:52.926412 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:36:52.926424 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:36:52.926436 | orchestrator |
2026-03-18 02:36:52.926448 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-18 02:36:52.926459 | orchestrator | Wednesday 18 March 2026 02:36:12 +0000 (0:00:01.286) 0:04:09.238 *******
2026-03-18 02:36:52.926470 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:36:52.926482 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:36:52.926492 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:36:52.926503 | orchestrator |
2026-03-18 02:36:52.926514 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-18 02:36:52.926525 | orchestrator | Wednesday 18 March 2026 02:36:13 +0000 (0:00:00.635) 0:04:09.874 *******
2026-03-18 02:36:52.926537 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:36:52.926548 | orchestrator |
2026-03-18 02:36:52.926559 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-18 02:36:52.926570 | orchestrator | Wednesday 18 March 2026 02:36:14 +0000 (0:00:00.593) 0:04:10.467 *******
2026-03-18 02:36:52.926581 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:36:52.926592 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:36:52.926603 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:36:52.926614 | orchestrator |
2026-03-18 02:36:52.926627 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-18 02:36:52.926639 | orchestrator | Wednesday 18 March 2026 02:36:14 +0000 (0:00:00.335) 0:04:10.803 *******
2026-03-18 02:36:52.926651 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:36:52.926663 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:36:52.926676 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:36:52.926719 | orchestrator |
2026-03-18 02:36:52.926732 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-18 02:36:52.926744 | orchestrator | Wednesday 18 March 2026 02:36:15 +0000 (0:00:00.635) 0:04:11.438 *******
2026-03-18 02:36:52.926757 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:36:52.926770 | orchestrator |
2026-03-18 02:36:52.926782 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-18 02:36:52.926794 | orchestrator | Wednesday 18 March 2026 02:36:15 +0000 (0:00:00.585) 0:04:12.023 *******
2026-03-18 02:36:52.926806 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:36:52.926818 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:36:52.926831 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:36:52.926843 | orchestrator |
2026-03-18 02:36:52.926855 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-18 02:36:52.926867 | orchestrator | Wednesday 18 March 2026 02:36:17 +0000 (0:00:01.787) 0:04:13.811 *******
2026-03-18 02:36:52.926879 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:36:52.926891 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:36:52.926904 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:36:52.926914 | orchestrator |
2026-03-18 02:36:52.926926 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-18 02:36:52.926946 | orchestrator | Wednesday 18 March 2026 02:36:19 +0000 (0:00:01.485) 0:04:15.296 *******
2026-03-18 02:36:52.926972 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:36:52.926993 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:36:52.927013 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:36:52.927032 | orchestrator |
2026-03-18 02:36:52.927050 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-18 02:36:52.927069 | orchestrator | Wednesday 18 March 2026 02:36:20 +0000 (0:00:01.828) 0:04:17.125 *******
2026-03-18 02:36:52.927088 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:36:52.927107 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:36:52.927128 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:36:52.927148 | orchestrator |
2026-03-18 02:36:52.927167 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-18 02:36:52.927187 | orchestrator | Wednesday 18 March 2026 02:36:22 +0000 (0:00:02.030) 0:04:19.156 *******
2026-03-18 02:36:52.927198 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:36:52.927236 | orchestrator |
2026-03-18 02:36:52.927247 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-18 02:36:52.927258 | orchestrator | Wednesday 18 March 2026 02:36:23 +0000 (0:00:00.894) 0:04:20.050 *******
2026-03-18 02:36:52.927269 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:52.927282 | orchestrator |
2026-03-18 02:36:52.927293 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-18 02:36:52.927304 | orchestrator | Wednesday 18 March 2026 02:36:25 +0000 (0:00:01.209) 0:04:21.259 *******
2026-03-18 02:36:52.927331 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:36:52.927342 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:52.927353 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:36:52.927364 | orchestrator |
2026-03-18 02:36:52.927375 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-18 02:36:52.927386 | orchestrator | Wednesday 18 March 2026 02:36:34 +0000 (0:00:09.130) 0:04:30.390 *******
2026-03-18 02:36:52.927397 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:36:52.927407 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:36:52.927418 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:36:52.927429 | orchestrator |
2026-03-18 02:36:52.927439 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-18 02:36:52.927450 | orchestrator | Wednesday 18 March 2026 02:36:34 +0000 (0:00:00.397) 0:04:30.787 *******
2026-03-18 02:36:52.927482 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9fc1dcb077ed7650e9aeb05e98fed00e3d8f26cb'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-18 02:36:52.927510 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9fc1dcb077ed7650e9aeb05e98fed00e3d8f26cb'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-18 02:36:52.927523 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9fc1dcb077ed7650e9aeb05e98fed00e3d8f26cb'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-18 02:36:52.927536 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9fc1dcb077ed7650e9aeb05e98fed00e3d8f26cb'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-18 02:36:52.927548 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9fc1dcb077ed7650e9aeb05e98fed00e3d8f26cb'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-18 02:36:52.927561 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__9fc1dcb077ed7650e9aeb05e98fed00e3d8f26cb'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__9fc1dcb077ed7650e9aeb05e98fed00e3d8f26cb'}])
2026-03-18 02:36:52.927573 | orchestrator |
2026-03-18 02:36:52.927584 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-18 02:36:52.927595 | orchestrator | Wednesday 18 March 2026 02:36:48 +0000 (0:00:14.438) 0:04:45.226 *******
2026-03-18 02:36:52.927606 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:36:52.927617 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:36:52.927628 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:36:52.927639 | orchestrator |
2026-03-18 02:36:52.927650 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-18 02:36:52.927660 | orchestrator | Wednesday 18 March 2026 02:36:49 +0000 (0:00:00.406) 0:04:45.633 *******
2026-03-18 02:36:52.927671 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:36:52.927682 | orchestrator |
2026-03-18 02:36:52.927693 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-18 02:36:52.927704 | orchestrator | Wednesday 18 March 2026 02:36:50 +0000 (0:00:00.870) 0:04:46.503 *******
2026-03-18 02:36:52.927715 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:52.927725 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:36:52.927736 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:36:52.927747 | orchestrator |
2026-03-18 02:36:52.927758 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-18 02:36:52.927769 | orchestrator | Wednesday 18 March 2026 02:36:50 +0000 (0:00:00.386) 0:04:46.890 *******
2026-03-18 02:36:52.927780 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:36:52.927797 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:36:52.927808 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:36:52.927820 | orchestrator |
2026-03-18 02:36:52.927838 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-18 02:36:52.927862 | orchestrator | Wednesday 18 March 2026 02:36:51 +0000 (0:00:00.367) 0:04:47.257 *******
2026-03-18 02:36:52.927882 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-18 02:36:52.927894 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-18 02:36:52.927905 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-18 02:36:52.927916 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:36:52.927926 | orchestrator |
2026-03-18 02:36:52.927937 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-18 02:36:52.927948 | orchestrator | Wednesday 18 March 2026 02:36:52 +0000 (0:00:01.006) 0:04:48.263 *******
2026-03-18 02:36:52.927959 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:36:52.927969 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:36:52.927980 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:36:52.927990 | orchestrator |
2026-03-18 02:36:52.928001 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-18 02:36:52.928012 | orchestrator |
2026-03-18 02:36:52.928023 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-18 02:36:52.928041 | orchestrator | Wednesday 18 March 2026 02:36:52 +0000 (0:00:00.902) 0:04:49.166 *******
2026-03-18 02:37:19.750689 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:37:19.750785 | orchestrator |
2026-03-18 02:37:19.750802 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-18 02:37:19.750814 | orchestrator | Wednesday 18 March 2026 02:36:53 +0000 (0:00:00.555) 0:04:49.722 *******
2026-03-18 02:37:19.750826 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:37:19.750837 | orchestrator |
2026-03-18 02:37:19.750847 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-18 02:37:19.750880 | orchestrator | Wednesday 18 March 2026 02:36:54 +0000 (0:00:00.815) 0:04:50.537 *******
2026-03-18 02:37:19.750890 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:37:19.750902 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:37:19.750911 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:37:19.750921 | orchestrator |
2026-03-18 02:37:19.750932 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-18 02:37:19.750943 | orchestrator | Wednesday 18 March 2026 02:36:55 +0000 (0:00:00.779) 0:04:51.316 *******
2026-03-18 02:37:19.750954 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:37:19.750966 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:37:19.750978 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:37:19.750989 | orchestrator |
2026-03-18 02:37:19.750998 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-18 02:37:19.751005 | orchestrator | Wednesday 18 March 2026 02:36:55 +0000 (0:00:00.378) 0:04:51.695 *******
2026-03-18 02:37:19.751011 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:37:19.751017 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:37:19.751024 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:37:19.751030 | orchestrator |
2026-03-18 02:37:19.751036 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-18 02:37:19.751042 | orchestrator | Wednesday 18 March 2026 02:36:56 +0000 (0:00:00.595) 0:04:52.290 *******
2026-03-18 02:37:19.751049 |
orchestrator | skipping: [testbed-node-0] 2026-03-18 02:37:19.751055 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:37:19.751062 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:37:19.751068 | orchestrator | 2026-03-18 02:37:19.751075 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-18 02:37:19.751081 | orchestrator | Wednesday 18 March 2026 02:36:56 +0000 (0:00:00.362) 0:04:52.653 ******* 2026-03-18 02:37:19.751106 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:37:19.751113 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:37:19.751119 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:37:19.751125 | orchestrator | 2026-03-18 02:37:19.751131 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-18 02:37:19.751138 | orchestrator | Wednesday 18 March 2026 02:36:57 +0000 (0:00:00.766) 0:04:53.420 ******* 2026-03-18 02:37:19.751144 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:37:19.751150 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:37:19.751156 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:37:19.751162 | orchestrator | 2026-03-18 02:37:19.751168 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-18 02:37:19.751175 | orchestrator | Wednesday 18 March 2026 02:36:57 +0000 (0:00:00.372) 0:04:53.792 ******* 2026-03-18 02:37:19.751181 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:37:19.751192 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:37:19.751274 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:37:19.751284 | orchestrator | 2026-03-18 02:37:19.751295 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-18 02:37:19.751307 | orchestrator | Wednesday 18 March 2026 02:36:58 +0000 (0:00:00.627) 0:04:54.420 ******* 2026-03-18 02:37:19.751318 | orchestrator | ok: 
[testbed-node-0] 2026-03-18 02:37:19.751330 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:37:19.751340 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:37:19.751351 | orchestrator | 2026-03-18 02:37:19.751362 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-18 02:37:19.751374 | orchestrator | Wednesday 18 March 2026 02:36:59 +0000 (0:00:00.879) 0:04:55.299 ******* 2026-03-18 02:37:19.751384 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:37:19.751394 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:37:19.751401 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:37:19.751412 | orchestrator | 2026-03-18 02:37:19.751421 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-18 02:37:19.751429 | orchestrator | Wednesday 18 March 2026 02:36:59 +0000 (0:00:00.754) 0:04:56.053 ******* 2026-03-18 02:37:19.751436 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:37:19.751456 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:37:19.751462 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:37:19.751469 | orchestrator | 2026-03-18 02:37:19.751475 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-18 02:37:19.751481 | orchestrator | Wednesday 18 March 2026 02:37:00 +0000 (0:00:00.395) 0:04:56.449 ******* 2026-03-18 02:37:19.751502 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:37:19.751508 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:37:19.751514 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:37:19.751520 | orchestrator | 2026-03-18 02:37:19.751527 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-18 02:37:19.751533 | orchestrator | Wednesday 18 March 2026 02:37:00 +0000 (0:00:00.663) 0:04:57.112 ******* 2026-03-18 02:37:19.751539 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:37:19.751545 | 
orchestrator | skipping: [testbed-node-1] 2026-03-18 02:37:19.751551 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:37:19.751557 | orchestrator | 2026-03-18 02:37:19.751564 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-18 02:37:19.751570 | orchestrator | Wednesday 18 March 2026 02:37:01 +0000 (0:00:00.339) 0:04:57.452 ******* 2026-03-18 02:37:19.751576 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:37:19.751582 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:37:19.751588 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:37:19.751594 | orchestrator | 2026-03-18 02:37:19.751600 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-18 02:37:19.751622 | orchestrator | Wednesday 18 March 2026 02:37:01 +0000 (0:00:00.368) 0:04:57.820 ******* 2026-03-18 02:37:19.751628 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:37:19.751643 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:37:19.751649 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:37:19.751655 | orchestrator | 2026-03-18 02:37:19.751661 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-18 02:37:19.751667 | orchestrator | Wednesday 18 March 2026 02:37:01 +0000 (0:00:00.359) 0:04:58.180 ******* 2026-03-18 02:37:19.751673 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:37:19.751679 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:37:19.751686 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:37:19.751692 | orchestrator | 2026-03-18 02:37:19.751698 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-18 02:37:19.751704 | orchestrator | Wednesday 18 March 2026 02:37:02 +0000 (0:00:00.637) 0:04:58.818 ******* 2026-03-18 02:37:19.751710 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:37:19.751716 | 
orchestrator | skipping: [testbed-node-1] 2026-03-18 02:37:19.751722 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:37:19.751728 | orchestrator | 2026-03-18 02:37:19.751734 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-18 02:37:19.751740 | orchestrator | Wednesday 18 March 2026 02:37:02 +0000 (0:00:00.336) 0:04:59.154 ******* 2026-03-18 02:37:19.751747 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:37:19.751753 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:37:19.751759 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:37:19.751765 | orchestrator | 2026-03-18 02:37:19.751771 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-18 02:37:19.751777 | orchestrator | Wednesday 18 March 2026 02:37:03 +0000 (0:00:00.379) 0:04:59.534 ******* 2026-03-18 02:37:19.751783 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:37:19.751789 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:37:19.751795 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:37:19.751801 | orchestrator | 2026-03-18 02:37:19.751807 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-18 02:37:19.751813 | orchestrator | Wednesday 18 March 2026 02:37:03 +0000 (0:00:00.396) 0:04:59.931 ******* 2026-03-18 02:37:19.751819 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:37:19.751825 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:37:19.751831 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:37:19.751837 | orchestrator | 2026-03-18 02:37:19.751843 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-18 02:37:19.751850 | orchestrator | Wednesday 18 March 2026 02:37:04 +0000 (0:00:00.904) 0:05:00.836 ******* 2026-03-18 02:37:19.751856 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 02:37:19.751863 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 02:37:19.751869 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 02:37:19.751875 | orchestrator | 2026-03-18 02:37:19.751882 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-18 02:37:19.751888 | orchestrator | Wednesday 18 March 2026 02:37:05 +0000 (0:00:00.682) 0:05:01.519 ******* 2026-03-18 02:37:19.751894 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:37:19.751900 | orchestrator | 2026-03-18 02:37:19.751907 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-18 02:37:19.751913 | orchestrator | Wednesday 18 March 2026 02:37:06 +0000 (0:00:00.888) 0:05:02.407 ******* 2026-03-18 02:37:19.751919 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:37:19.751925 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:37:19.751931 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:37:19.751937 | orchestrator | 2026-03-18 02:37:19.751943 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-18 02:37:19.751949 | orchestrator | Wednesday 18 March 2026 02:37:06 +0000 (0:00:00.782) 0:05:03.190 ******* 2026-03-18 02:37:19.751955 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:37:19.751962 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:37:19.751971 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:37:19.751977 | orchestrator | 2026-03-18 02:37:19.751983 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-18 02:37:19.751989 | orchestrator | Wednesday 18 March 2026 02:37:07 +0000 (0:00:00.377) 0:05:03.567 ******* 2026-03-18 02:37:19.751996 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-18 
02:37:19.752002 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-18 02:37:19.752008 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-18 02:37:19.752015 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-18 02:37:19.752021 | orchestrator | 2026-03-18 02:37:19.752027 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-18 02:37:19.752033 | orchestrator | Wednesday 18 March 2026 02:37:16 +0000 (0:00:09.308) 0:05:12.876 ******* 2026-03-18 02:37:19.752039 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:37:19.752048 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:37:19.752064 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:37:19.752076 | orchestrator | 2026-03-18 02:37:19.752086 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-18 02:37:19.752097 | orchestrator | Wednesday 18 March 2026 02:37:17 +0000 (0:00:00.395) 0:05:13.271 ******* 2026-03-18 02:37:19.752108 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-18 02:37:19.752118 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-18 02:37:19.752130 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-18 02:37:19.752141 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-18 02:37:19.752153 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:37:19.752165 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:37:19.752176 | orchestrator | 2026-03-18 02:37:19.752184 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-18 02:37:19.752190 | orchestrator | Wednesday 18 March 2026 02:37:19 +0000 (0:00:02.434) 0:05:15.706 ******* 2026-03-18 02:37:19.752218 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-18 02:37:19.752231 | 
orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-18 02:38:17.014822 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-18 02:38:17.014921 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-18 02:38:17.014932 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-18 02:38:17.014940 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-18 02:38:17.014947 | orchestrator | 2026-03-18 02:38:17.014956 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-18 02:38:17.014965 | orchestrator | Wednesday 18 March 2026 02:37:20 +0000 (0:00:01.285) 0:05:16.991 ******* 2026-03-18 02:38:17.014972 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:38:17.014980 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:38:17.014987 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:38:17.015003 | orchestrator | 2026-03-18 02:38:17.015012 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-18 02:38:17.015019 | orchestrator | Wednesday 18 March 2026 02:37:21 +0000 (0:00:00.644) 0:05:17.636 ******* 2026-03-18 02:38:17.015027 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:38:17.015043 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:38:17.015050 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:38:17.015058 | orchestrator | 2026-03-18 02:38:17.015074 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-18 02:38:17.015082 | orchestrator | Wednesday 18 March 2026 02:37:21 +0000 (0:00:00.349) 0:05:17.985 ******* 2026-03-18 02:38:17.015089 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:38:17.015104 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:38:17.015112 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:38:17.015119 | orchestrator | 2026-03-18 02:38:17.015127 | orchestrator | TASK [ceph-mgr : Include 
start_mgr.yml] **************************************** 2026-03-18 02:38:17.015152 | orchestrator | Wednesday 18 March 2026 02:37:22 +0000 (0:00:00.629) 0:05:18.614 ******* 2026-03-18 02:38:17.015161 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:38:17.015169 | orchestrator | 2026-03-18 02:38:17.015210 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-18 02:38:17.015218 | orchestrator | Wednesday 18 March 2026 02:37:22 +0000 (0:00:00.613) 0:05:19.228 ******* 2026-03-18 02:38:17.015226 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:38:17.015233 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:38:17.015241 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:38:17.015248 | orchestrator | 2026-03-18 02:38:17.015255 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-18 02:38:17.015263 | orchestrator | Wednesday 18 March 2026 02:37:23 +0000 (0:00:00.374) 0:05:19.602 ******* 2026-03-18 02:38:17.015270 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:38:17.015277 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:38:17.015284 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:38:17.015291 | orchestrator | 2026-03-18 02:38:17.015299 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-18 02:38:17.015306 | orchestrator | Wednesday 18 March 2026 02:37:23 +0000 (0:00:00.652) 0:05:20.254 ******* 2026-03-18 02:38:17.015313 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:38:17.015321 | orchestrator | 2026-03-18 02:38:17.015329 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-18 02:38:17.015336 | orchestrator | Wednesday 18 March 2026 02:37:24 
+0000 (0:00:00.590) 0:05:20.845 ******* 2026-03-18 02:38:17.015343 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:38:17.015350 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:38:17.015358 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:38:17.015365 | orchestrator | 2026-03-18 02:38:17.015374 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-18 02:38:17.015382 | orchestrator | Wednesday 18 March 2026 02:37:25 +0000 (0:00:01.317) 0:05:22.162 ******* 2026-03-18 02:38:17.015391 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:38:17.015399 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:38:17.015407 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:38:17.015416 | orchestrator | 2026-03-18 02:38:17.015424 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-18 02:38:17.015433 | orchestrator | Wednesday 18 March 2026 02:37:27 +0000 (0:00:01.620) 0:05:23.783 ******* 2026-03-18 02:38:17.015441 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:38:17.015449 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:38:17.015457 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:38:17.015465 | orchestrator | 2026-03-18 02:38:17.015474 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-18 02:38:17.015483 | orchestrator | Wednesday 18 March 2026 02:37:30 +0000 (0:00:02.871) 0:05:26.654 ******* 2026-03-18 02:38:17.015492 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:38:17.015500 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:38:17.015509 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:38:17.015516 | orchestrator | 2026-03-18 02:38:17.015537 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-18 02:38:17.015546 | orchestrator | Wednesday 18 March 2026 02:37:32 +0000 
(0:00:02.060) 0:05:28.715 ******* 2026-03-18 02:38:17.015554 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:38:17.015562 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:38:17.015571 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-18 02:38:17.015579 | orchestrator | 2026-03-18 02:38:17.015588 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-18 02:38:17.015596 | orchestrator | Wednesday 18 March 2026 02:37:33 +0000 (0:00:00.705) 0:05:29.421 ******* 2026-03-18 02:38:17.015611 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-18 02:38:17.015619 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-18 02:38:17.015628 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-18 02:38:17.015649 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 
2026-03-18 02:38:17.015659 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-18 02:38:17.015667 | orchestrator | 2026-03-18 02:38:17.015675 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-18 02:38:17.015684 | orchestrator | Wednesday 18 March 2026 02:37:57 +0000 (0:00:24.349) 0:05:53.770 ******* 2026-03-18 02:38:17.015692 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-18 02:38:17.015701 | orchestrator | 2026-03-18 02:38:17.015709 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-18 02:38:17.015716 | orchestrator | Wednesday 18 March 2026 02:37:58 +0000 (0:00:01.356) 0:05:55.126 ******* 2026-03-18 02:38:17.015723 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:38:17.015731 | orchestrator | 2026-03-18 02:38:17.015738 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-18 02:38:17.015745 | orchestrator | Wednesday 18 March 2026 02:37:59 +0000 (0:00:00.318) 0:05:55.445 ******* 2026-03-18 02:38:17.015752 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:38:17.015760 | orchestrator | 2026-03-18 02:38:17.015767 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-18 02:38:17.015774 | orchestrator | Wednesday 18 March 2026 02:37:59 +0000 (0:00:00.169) 0:05:55.614 ******* 2026-03-18 02:38:17.015781 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-18 02:38:17.015789 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-18 02:38:17.015796 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-18 02:38:17.015803 | orchestrator | 2026-03-18 02:38:17.015810 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-03-18 02:38:17.015817 | orchestrator | Wednesday 18 March 2026 02:38:05 +0000 (0:00:06.304) 0:06:01.918 ******* 2026-03-18 02:38:17.015825 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-18 02:38:17.015833 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-18 02:38:17.015840 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-18 02:38:17.015847 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-18 02:38:17.015855 | orchestrator | 2026-03-18 02:38:17.015862 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-18 02:38:17.015869 | orchestrator | Wednesday 18 March 2026 02:38:10 +0000 (0:00:05.084) 0:06:07.003 ******* 2026-03-18 02:38:17.015876 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:38:17.015884 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:38:17.015891 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:38:17.015898 | orchestrator | 2026-03-18 02:38:17.015906 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-18 02:38:17.015913 | orchestrator | Wednesday 18 March 2026 02:38:11 +0000 (0:00:00.699) 0:06:07.702 ******* 2026-03-18 02:38:17.015920 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:38:17.015927 | orchestrator | 2026-03-18 02:38:17.015935 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-18 02:38:17.015942 | orchestrator | Wednesday 18 March 2026 02:38:12 +0000 (0:00:00.580) 0:06:08.282 ******* 2026-03-18 02:38:17.015949 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:38:17.015962 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:38:17.015969 | orchestrator | ok: 
[testbed-node-2] 2026-03-18 02:38:17.015976 | orchestrator | 2026-03-18 02:38:17.015983 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-18 02:38:17.015991 | orchestrator | Wednesday 18 March 2026 02:38:12 +0000 (0:00:00.647) 0:06:08.929 ******* 2026-03-18 02:38:17.015998 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:38:17.016005 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:38:17.016013 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:38:17.016020 | orchestrator | 2026-03-18 02:38:17.016027 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-18 02:38:17.016034 | orchestrator | Wednesday 18 March 2026 02:38:13 +0000 (0:00:01.207) 0:06:10.137 ******* 2026-03-18 02:38:17.016041 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-18 02:38:17.016049 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-18 02:38:17.016056 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-18 02:38:17.016063 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:38:17.016071 | orchestrator | 2026-03-18 02:38:17.016078 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-18 02:38:17.016090 | orchestrator | Wednesday 18 March 2026 02:38:14 +0000 (0:00:00.682) 0:06:10.819 ******* 2026-03-18 02:38:17.016097 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:38:17.016104 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:38:17.016112 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:38:17.016124 | orchestrator | 2026-03-18 02:38:17.016136 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-18 02:38:17.016149 | orchestrator | 2026-03-18 02:38:17.016161 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-18 
02:38:17.016172 | orchestrator | Wednesday 18 March 2026 02:38:15 +0000 (0:00:00.652) 0:06:11.472 ******* 2026-03-18 02:38:17.016206 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 02:38:17.016221 | orchestrator | 2026-03-18 02:38:17.016232 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-18 02:38:17.016244 | orchestrator | Wednesday 18 March 2026 02:38:16 +0000 (0:00:00.962) 0:06:12.435 ******* 2026-03-18 02:38:17.016256 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 02:38:17.016266 | orchestrator | 2026-03-18 02:38:17.016286 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-18 02:38:33.692522 | orchestrator | Wednesday 18 March 2026 02:38:16 +0000 (0:00:00.815) 0:06:13.250 ******* 2026-03-18 02:38:33.692694 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:38:33.692721 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:38:33.692740 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:38:33.692757 | orchestrator | 2026-03-18 02:38:33.692777 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-18 02:38:33.692796 | orchestrator | Wednesday 18 March 2026 02:38:17 +0000 (0:00:00.352) 0:06:13.603 ******* 2026-03-18 02:38:33.692814 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:38:33.692833 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:38:33.692852 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:38:33.692867 | orchestrator | 2026-03-18 02:38:33.692877 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-18 02:38:33.692887 | orchestrator | Wednesday 18 March 2026 02:38:18 +0000 (0:00:00.686) 0:06:14.289 ******* 
2026-03-18 02:38:33.692896 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:38:33.692906 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:38:33.692916 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:38:33.692928 | orchestrator |
2026-03-18 02:38:33.692945 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-18 02:38:33.692959 | orchestrator | Wednesday 18 March 2026 02:38:18 +0000 (0:00:00.738) 0:06:15.027 *******
2026-03-18 02:38:33.692993 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:38:33.693004 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:38:33.693013 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:38:33.693023 | orchestrator |
2026-03-18 02:38:33.693033 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-18 02:38:33.693042 | orchestrator | Wednesday 18 March 2026 02:38:19 +0000 (0:00:01.168) 0:06:16.195 *******
2026-03-18 02:38:33.693052 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:38:33.693063 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:38:33.693074 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:38:33.693085 | orchestrator |
2026-03-18 02:38:33.693096 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-18 02:38:33.693108 | orchestrator | Wednesday 18 March 2026 02:38:20 +0000 (0:00:00.353) 0:06:16.549 *******
2026-03-18 02:38:33.693125 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:38:33.693153 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:38:33.693195 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:38:33.693213 | orchestrator |
2026-03-18 02:38:33.693229 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-18 02:38:33.693244 | orchestrator | Wednesday 18 March 2026 02:38:20 +0000 (0:00:00.338) 0:06:16.888 *******
2026-03-18 02:38:33.693259 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:38:33.693276 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:38:33.693293 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:38:33.693309 | orchestrator |
2026-03-18 02:38:33.693325 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-18 02:38:33.693341 | orchestrator | Wednesday 18 March 2026 02:38:20 +0000 (0:00:00.320) 0:06:17.208 *******
2026-03-18 02:38:33.693355 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:38:33.693371 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:38:33.693387 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:38:33.693404 | orchestrator |
2026-03-18 02:38:33.693421 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-18 02:38:33.693437 | orchestrator | Wednesday 18 March 2026 02:38:21 +0000 (0:00:01.000) 0:06:18.208 *******
2026-03-18 02:38:33.693530 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:38:33.693548 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:38:33.693563 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:38:33.693577 | orchestrator |
2026-03-18 02:38:33.693594 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-18 02:38:33.693611 | orchestrator | Wednesday 18 March 2026 02:38:22 +0000 (0:00:00.726) 0:06:18.935 *******
2026-03-18 02:38:33.693628 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:38:33.693645 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:38:33.693660 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:38:33.693675 | orchestrator |
2026-03-18 02:38:33.693692 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-18 02:38:33.693710 | orchestrator | Wednesday 18 March 2026 02:38:23 +0000 (0:00:00.325) 0:06:19.260 *******
2026-03-18 02:38:33.693726 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:38:33.693741 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:38:33.693756 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:38:33.693772 | orchestrator |
2026-03-18 02:38:33.693789 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-18 02:38:33.693804 | orchestrator | Wednesday 18 March 2026 02:38:23 +0000 (0:00:00.384) 0:06:19.645 *******
2026-03-18 02:38:33.693821 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:38:33.693836 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:38:33.693851 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:38:33.693867 | orchestrator |
2026-03-18 02:38:33.693883 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-18 02:38:33.693923 | orchestrator | Wednesday 18 March 2026 02:38:24 +0000 (0:00:00.686) 0:06:20.331 *******
2026-03-18 02:38:33.693941 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:38:33.693972 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:38:33.693988 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:38:33.694003 | orchestrator |
2026-03-18 02:38:33.694113 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-18 02:38:33.694138 | orchestrator | Wednesday 18 March 2026 02:38:24 +0000 (0:00:00.365) 0:06:20.697 *******
2026-03-18 02:38:33.694153 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:38:33.694196 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:38:33.694213 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:38:33.694227 | orchestrator |
2026-03-18 02:38:33.694244 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-18 02:38:33.694261 | orchestrator | Wednesday 18 March 2026 02:38:24 +0000 (0:00:00.386) 0:06:21.084 *******
2026-03-18 02:38:33.694277 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:38:33.694293 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:38:33.694311 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:38:33.694326 | orchestrator |
2026-03-18 02:38:33.694343 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-18 02:38:33.694361 | orchestrator | Wednesday 18 March 2026 02:38:25 +0000 (0:00:00.363) 0:06:21.447 *******
2026-03-18 02:38:33.694408 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:38:33.694427 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:38:33.694443 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:38:33.694460 | orchestrator |
2026-03-18 02:38:33.694477 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-18 02:38:33.694492 | orchestrator | Wednesday 18 March 2026 02:38:25 +0000 (0:00:00.636) 0:06:22.084 *******
2026-03-18 02:38:33.694508 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:38:33.694525 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:38:33.694542 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:38:33.694559 | orchestrator |
2026-03-18 02:38:33.694575 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-18 02:38:33.694591 | orchestrator | Wednesday 18 March 2026 02:38:26 +0000 (0:00:00.393) 0:06:22.477 *******
2026-03-18 02:38:33.694606 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:38:33.694621 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:38:33.694638 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:38:33.694653 | orchestrator |
2026-03-18 02:38:33.694668 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-18 02:38:33.694685 | orchestrator | Wednesday 18 March 2026 02:38:26 +0000 (0:00:00.354) 0:06:22.832 *******
2026-03-18 02:38:33.694701 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:38:33.694717 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:38:33.694733 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:38:33.694749 | orchestrator |
2026-03-18 02:38:33.694765 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-18 02:38:33.694780 | orchestrator | Wednesday 18 March 2026 02:38:27 +0000 (0:00:00.909) 0:06:23.741 *******
2026-03-18 02:38:33.694876 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:38:33.694911 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:38:33.694927 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:38:33.694943 | orchestrator |
2026-03-18 02:38:33.694959 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-18 02:38:33.694976 | orchestrator | Wednesday 18 March 2026 02:38:27 +0000 (0:00:00.374) 0:06:24.115 *******
2026-03-18 02:38:33.694991 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 02:38:33.695008 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 02:38:33.695023 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 02:38:33.695038 | orchestrator |
2026-03-18 02:38:33.695053 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-18 02:38:33.695071 | orchestrator | Wednesday 18 March 2026 02:38:28 +0000 (0:00:00.689) 0:06:24.805 *******
2026-03-18 02:38:33.695087 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:38:33.695122 | orchestrator |
2026-03-18 02:38:33.695139 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-18 02:38:33.695156 | orchestrator | Wednesday 18 March 2026 02:38:29 +0000 (0:00:00.552) 0:06:25.358 *******
2026-03-18 02:38:33.695256 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:38:33.695277 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:38:33.695293 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:38:33.695308 | orchestrator |
2026-03-18 02:38:33.695322 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-18 02:38:33.695335 | orchestrator | Wednesday 18 March 2026 02:38:29 +0000 (0:00:00.614) 0:06:25.973 *******
2026-03-18 02:38:33.695347 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:38:33.695360 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:38:33.695371 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:38:33.695385 | orchestrator |
2026-03-18 02:38:33.695397 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-18 02:38:33.695411 | orchestrator | Wednesday 18 March 2026 02:38:30 +0000 (0:00:00.389) 0:06:26.362 *******
2026-03-18 02:38:33.695424 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:38:33.695438 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:38:33.695451 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:38:33.695464 | orchestrator |
2026-03-18 02:38:33.695477 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-18 02:38:33.695490 | orchestrator | Wednesday 18 March 2026 02:38:30 +0000 (0:00:00.646) 0:06:27.009 *******
2026-03-18 02:38:33.695504 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:38:33.695517 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:38:33.695530 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:38:33.695543 | orchestrator |
2026-03-18 02:38:33.695557 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-18 02:38:33.695570 | orchestrator | Wednesday 18 March 2026 02:38:31 +0000 (0:00:00.684) 0:06:27.693 *******
2026-03-18 02:38:33.695583 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-18 02:38:33.695609 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-18 02:38:33.695623 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-18 02:38:33.695636 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-18 02:38:33.695649 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-18 02:38:33.695662 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-18 02:38:33.695675 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-18 02:38:33.695687 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-18 02:38:33.695701 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-18 02:38:33.695714 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-18 02:38:33.695744 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-18 02:39:39.295047 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-18 02:39:39.295185 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-18 02:39:39.295200 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-18 02:39:39.295213 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-18 02:39:39.295224 | orchestrator |
2026-03-18 02:39:39.295236 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-18 02:39:39.295270 | orchestrator | Wednesday 18 March 2026 02:38:33 +0000 (0:00:02.232) 0:06:29.926 *******
2026-03-18 02:39:39.295281 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:39:39.295293 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:39:39.295303 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:39:39.295313 | orchestrator |
2026-03-18 02:39:39.295323 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-03-18 02:39:39.295333 | orchestrator | Wednesday 18 March 2026 02:38:34 +0000 (0:00:00.360) 0:06:30.286 *******
2026-03-18 02:39:39.295344 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:39:39.295356 | orchestrator |
2026-03-18 02:39:39.295366 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-03-18 02:39:39.295377 | orchestrator | Wednesday 18 March 2026 02:38:34 +0000 (0:00:00.861) 0:06:31.147 *******
2026-03-18 02:39:39.295387 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-18 02:39:39.295398 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-18 02:39:39.295409 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-18 02:39:39.295420 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-03-18 02:39:39.295431 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-03-18 02:39:39.295442 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-03-18 02:39:39.295452 | orchestrator |
2026-03-18 02:39:39.295462 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-03-18 02:39:39.295473 | orchestrator | Wednesday 18 March 2026 02:38:35 +0000 (0:00:01.097) 0:06:32.245 *******
2026-03-18 02:39:39.295483 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 02:39:39.295493 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-18 02:39:39.295504 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-18 02:39:39.295514 | orchestrator |
2026-03-18 02:39:39.295524 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-03-18 02:39:39.295534 | orchestrator | Wednesday 18 March 2026 02:38:38 +0000 (0:00:02.031) 0:06:34.277 *******
2026-03-18 02:39:39.295545 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-18 02:39:39.295555 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-18 02:39:39.295566 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:39:39.295577 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-18 02:39:39.295588 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-18 02:39:39.295599 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:39:39.295610 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-18 02:39:39.295621 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-18 02:39:39.295631 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:39:39.295642 | orchestrator |
2026-03-18 02:39:39.295653 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-18 02:39:39.295664 | orchestrator | Wednesday 18 March 2026 02:38:39 +0000 (0:00:01.172) 0:06:35.449 *******
2026-03-18 02:39:39.295675 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-18 02:39:39.295686 | orchestrator |
2026-03-18 02:39:39.295697 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-18 02:39:39.295708 | orchestrator | Wednesday 18 March 2026 02:38:41 +0000 (0:00:02.054) 0:06:37.503 *******
2026-03-18 02:39:39.295719 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:39:39.295730 | orchestrator |
2026-03-18 02:39:39.295741 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-03-18 02:39:39.295751 | orchestrator | Wednesday 18 March 2026 02:38:42 +0000 (0:00:00.895) 0:06:38.399 *******
2026-03-18 02:39:39.295780 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})
2026-03-18 02:39:39.295799 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})
2026-03-18 02:39:39.295810 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 02:39:39.295822 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})
2026-03-18 02:39:39.295832 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 02:39:39.295860 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})
2026-03-18 02:39:39.295871 | orchestrator |
2026-03-18 02:39:39.295882 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-18 02:39:39.295893 | orchestrator | Wednesday 18 March 2026 02:39:22 +0000 (0:00:39.950) 0:07:18.350 *******
2026-03-18 02:39:39.295903 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:39:39.295914 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:39:39.295925 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:39:39.295935 | orchestrator |
2026-03-18 02:39:39.295945 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-18 02:39:39.295956 | orchestrator | Wednesday 18 March 2026 02:39:22 +0000 (0:00:00.328) 0:07:18.678 *******
2026-03-18 02:39:39.295966 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:39:39.295976 | orchestrator |
2026-03-18 02:39:39.295986 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-18 02:39:39.295996 | orchestrator | Wednesday 18 March 2026 02:39:23 +0000 (0:00:00.885) 0:07:19.564 *******
2026-03-18 02:39:39.296007 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:39:39.296018 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:39:39.296028 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:39:39.296038 | orchestrator |
2026-03-18 02:39:39.296048 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-18 02:39:39.296058 | orchestrator | Wednesday 18 March 2026 02:39:23 +0000 (0:00:00.686) 0:07:20.251 *******
2026-03-18 02:39:39.296069 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:39:39.296079 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:39:39.296089 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:39:39.296099 | orchestrator |
2026-03-18 02:39:39.296110 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-18 02:39:39.296120 | orchestrator | Wednesday 18 March 2026 02:39:26 +0000 (0:00:02.606) 0:07:22.858 *******
2026-03-18 02:39:39.296130 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:39:39.296140 | orchestrator |
2026-03-18 02:39:39.296151 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-18 02:39:39.296213 | orchestrator | Wednesday 18 March 2026 02:39:27 +0000 (0:00:00.865) 0:07:23.723 *******
2026-03-18 02:39:39.296223 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:39:39.296232 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:39:39.296242 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:39:39.296252 | orchestrator |
2026-03-18 02:39:39.296261 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-18 02:39:39.296271 | orchestrator | Wednesday 18 March 2026 02:39:28 +0000 (0:00:01.243) 0:07:24.967 *******
2026-03-18 02:39:39.296280 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:39:39.296289 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:39:39.296298 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:39:39.296315 | orchestrator |
2026-03-18 02:39:39.296324 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-18 02:39:39.296334 | orchestrator | Wednesday 18 March 2026 02:39:29 +0000 (0:00:01.163) 0:07:26.130 *******
2026-03-18 02:39:39.296343 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:39:39.296352 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:39:39.296362 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:39:39.296371 | orchestrator |
2026-03-18 02:39:39.296380 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-18 02:39:39.296390 | orchestrator | Wednesday 18 March 2026 02:39:31 +0000 (0:00:01.971) 0:07:28.102 *******
2026-03-18 02:39:39.296399 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:39:39.296408 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:39:39.296418 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:39:39.296427 | orchestrator |
2026-03-18 02:39:39.296436 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-18 02:39:39.296446 | orchestrator | Wednesday 18 March 2026 02:39:32 +0000 (0:00:00.379) 0:07:28.481 *******
2026-03-18 02:39:39.296455 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:39:39.296465 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:39:39.296474 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:39:39.296483 | orchestrator |
2026-03-18 02:39:39.296492 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-18 02:39:39.296502 | orchestrator | Wednesday 18 March 2026 02:39:32 +0000 (0:00:00.350) 0:07:28.832 *******
2026-03-18 02:39:39.296511 | orchestrator | ok: [testbed-node-3] => (item=1)
2026-03-18 02:39:39.296521 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-18 02:39:39.296530 | orchestrator | ok: [testbed-node-5] => (item=3)
2026-03-18 02:39:39.296539 | orchestrator | ok: [testbed-node-3] => (item=4)
2026-03-18 02:39:39.296549 | orchestrator | ok: [testbed-node-4] => (item=5)
2026-03-18 02:39:39.296558 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-03-18 02:39:39.296567 | orchestrator |
2026-03-18 02:39:39.296581 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-18 02:39:39.296591 | orchestrator | Wednesday 18 March 2026 02:39:33 +0000 (0:00:01.045) 0:07:29.877 *******
2026-03-18 02:39:39.296600 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-03-18 02:39:39.296610 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-03-18 02:39:39.296619 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-03-18 02:39:39.296628 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-03-18 02:39:39.296637 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-03-18 02:39:39.296647 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-18 02:39:39.296656 | orchestrator |
2026-03-18 02:39:39.296665 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-18 02:39:39.296675 | orchestrator | Wednesday 18 March 2026 02:39:35 +0000 (0:00:02.320) 0:07:32.198 *******
2026-03-18 02:39:39.296684 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-03-18 02:39:39.296693 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-03-18 02:39:39.296703 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-03-18 02:39:39.296712 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-18 02:39:39.296721 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-03-18 02:39:39.296731 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-03-18 02:39:39.296740 | orchestrator |
2026-03-18 02:39:39.296755 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-18 02:40:12.613549 | orchestrator | Wednesday 18 March 2026 02:39:39 +0000 (0:00:03.327) 0:07:35.526 *******
2026-03-18 02:40:12.613733 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.613752 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:40:12.613764 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-18 02:40:12.613775 | orchestrator |
2026-03-18 02:40:12.613787 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-18 02:40:12.613823 | orchestrator | Wednesday 18 March 2026 02:39:42 +0000 (0:00:03.025) 0:07:38.551 *******
2026-03-18 02:40:12.613835 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.613846 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:40:12.613857 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-03-18 02:40:12.613868 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-18 02:40:12.613879 | orchestrator |
2026-03-18 02:40:12.613890 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-18 02:40:12.613901 | orchestrator | Wednesday 18 March 2026 02:39:54 +0000 (0:00:12.490) 0:07:51.042 *******
2026-03-18 02:40:12.613911 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.613922 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:40:12.613932 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:40:12.613943 | orchestrator |
2026-03-18 02:40:12.613954 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-18 02:40:12.613964 | orchestrator | Wednesday 18 March 2026 02:39:56 +0000 (0:00:01.287) 0:07:52.329 *******
2026-03-18 02:40:12.613976 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.613987 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:40:12.613998 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:40:12.614008 | orchestrator |
2026-03-18 02:40:12.614081 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-18 02:40:12.614095 | orchestrator | Wednesday 18 March 2026 02:39:56 +0000 (0:00:00.395) 0:07:52.724 *******
2026-03-18 02:40:12.614108 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:40:12.614120 | orchestrator |
2026-03-18 02:40:12.614132 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-18 02:40:12.614167 | orchestrator | Wednesday 18 March 2026 02:39:57 +0000 (0:00:00.897) 0:07:53.622 *******
2026-03-18 02:40:12.614190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-18 02:40:12.614217 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-18 02:40:12.614228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-18 02:40:12.614239 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.614249 | orchestrator |
2026-03-18 02:40:12.614260 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-18 02:40:12.614271 | orchestrator | Wednesday 18 March 2026 02:39:57 +0000 (0:00:00.454) 0:07:54.077 *******
2026-03-18 02:40:12.614281 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.614292 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:40:12.614303 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:40:12.614314 | orchestrator |
2026-03-18 02:40:12.614324 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-18 02:40:12.614335 | orchestrator | Wednesday 18 March 2026 02:39:58 +0000 (0:00:00.340) 0:07:54.418 *******
2026-03-18 02:40:12.614346 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.614356 | orchestrator |
2026-03-18 02:40:12.614367 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-18 02:40:12.614378 | orchestrator | Wednesday 18 March 2026 02:39:58 +0000 (0:00:00.245) 0:07:54.663 *******
2026-03-18 02:40:12.614388 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.614399 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:40:12.614410 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:40:12.614420 | orchestrator |
2026-03-18 02:40:12.614431 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-18 02:40:12.614441 | orchestrator | Wednesday 18 March 2026 02:39:59 +0000 (0:00:00.646) 0:07:55.310 *******
2026-03-18 02:40:12.614452 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.614463 | orchestrator |
2026-03-18 02:40:12.614473 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-18 02:40:12.614484 | orchestrator | Wednesday 18 March 2026 02:39:59 +0000 (0:00:00.267) 0:07:55.577 *******
2026-03-18 02:40:12.614506 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.614517 | orchestrator |
2026-03-18 02:40:12.614542 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-18 02:40:12.614553 | orchestrator | Wednesday 18 March 2026 02:39:59 +0000 (0:00:00.275) 0:07:55.853 *******
2026-03-18 02:40:12.614563 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.614574 | orchestrator |
2026-03-18 02:40:12.614585 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-18 02:40:12.614595 | orchestrator | Wednesday 18 March 2026 02:39:59 +0000 (0:00:00.133) 0:07:55.986 *******
2026-03-18 02:40:12.614606 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.614617 | orchestrator |
2026-03-18 02:40:12.614627 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-18 02:40:12.614638 | orchestrator | Wednesday 18 March 2026 02:39:59 +0000 (0:00:00.245) 0:07:56.232 *******
2026-03-18 02:40:12.614649 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.614659 | orchestrator |
2026-03-18 02:40:12.614670 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-18 02:40:12.614681 | orchestrator | Wednesday 18 March 2026 02:40:00 +0000 (0:00:00.286) 0:07:56.519 *******
2026-03-18 02:40:12.614692 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-18 02:40:12.614703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-18 02:40:12.614713 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-18 02:40:12.614724 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.614735 | orchestrator |
2026-03-18 02:40:12.614765 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-18 02:40:12.614777 | orchestrator | Wednesday 18 March 2026 02:40:00 +0000 (0:00:00.466) 0:07:56.986 *******
2026-03-18 02:40:12.614787 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.614798 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:40:12.614809 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:40:12.614819 | orchestrator |
2026-03-18 02:40:12.614830 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-18 02:40:12.614840 | orchestrator | Wednesday 18 March 2026 02:40:01 +0000 (0:00:00.357) 0:07:57.343 *******
2026-03-18 02:40:12.614851 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.614861 | orchestrator |
2026-03-18 02:40:12.614872 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-18 02:40:12.614883 | orchestrator | Wednesday 18 March 2026 02:40:01 +0000 (0:00:00.230) 0:07:57.574 *******
2026-03-18 02:40:12.614893 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.614904 | orchestrator |
2026-03-18 02:40:12.614914 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-18 02:40:12.614937 | orchestrator |
2026-03-18 02:40:12.614948 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-18 02:40:12.614959 | orchestrator | Wednesday 18 March 2026 02:40:02 +0000 (0:00:01.399) 0:07:58.974 *******
2026-03-18 02:40:12.614971 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:40:12.614983 | orchestrator |
2026-03-18 02:40:12.614994 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-18 02:40:12.615004 | orchestrator | Wednesday 18 March 2026 02:40:04 +0000 (0:00:01.379) 0:08:00.354 *******
2026-03-18 02:40:12.615016 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:40:12.615026 | orchestrator |
2026-03-18 02:40:12.615037 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-18 02:40:12.615047 | orchestrator | Wednesday 18 March 2026 02:40:05 +0000 (0:00:01.491) 0:08:01.846 *******
2026-03-18 02:40:12.615058 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:40:12.615077 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:40:12.615088 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:40:12.615099 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:40:12.615111 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:40:12.615122 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:40:12.615133 | orchestrator |
2026-03-18 02:40:12.615144 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-18 02:40:12.615182 | orchestrator | Wednesday 18 March 2026 02:40:07 +0000 (0:00:01.475) 0:08:03.322 *******
2026-03-18 02:40:12.615193 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:40:12.615203 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:40:12.615214 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:40:12.615225 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:40:12.615235 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:40:12.615246 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:40:12.615256 | orchestrator |
2026-03-18 02:40:12.615267 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-18 02:40:12.615277 | orchestrator | Wednesday 18 
March 2026 02:40:07 +0000 (0:00:00.742) 0:08:04.065 ******* 2026-03-18 02:40:12.615288 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:40:12.615298 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:40:12.615309 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:40:12.615319 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:40:12.615330 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:40:12.615340 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:40:12.615351 | orchestrator | 2026-03-18 02:40:12.615361 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-18 02:40:12.615372 | orchestrator | Wednesday 18 March 2026 02:40:08 +0000 (0:00:01.037) 0:08:05.103 ******* 2026-03-18 02:40:12.615383 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:40:12.615393 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:40:12.615404 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:40:12.615414 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:40:12.615425 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:40:12.615435 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:40:12.615446 | orchestrator | 2026-03-18 02:40:12.615456 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-18 02:40:12.615467 | orchestrator | Wednesday 18 March 2026 02:40:09 +0000 (0:00:00.691) 0:08:05.794 ******* 2026-03-18 02:40:12.615477 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:40:12.615494 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:40:12.615505 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:40:12.615516 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:40:12.615526 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:40:12.615537 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:40:12.615548 | orchestrator | 2026-03-18 02:40:12.615558 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-03-18 02:40:12.615569 | orchestrator | Wednesday 18 March 2026 02:40:10 +0000 (0:00:01.412) 0:08:07.207 ******* 2026-03-18 02:40:12.615580 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:40:12.615590 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:40:12.615601 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:40:12.615611 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:40:12.615622 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:40:12.615632 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:40:12.615643 | orchestrator | 2026-03-18 02:40:12.615654 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-18 02:40:12.615665 | orchestrator | Wednesday 18 March 2026 02:40:11 +0000 (0:00:00.681) 0:08:07.888 ******* 2026-03-18 02:40:12.615675 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:40:12.615686 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:40:12.615697 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:40:12.615707 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:40:12.615718 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:40:12.615736 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:40:12.615905 | orchestrator | 2026-03-18 02:40:12.615929 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-18 02:40:46.530844 | orchestrator | Wednesday 18 March 2026 02:40:12 +0000 (0:00:00.963) 0:08:08.852 ******* 2026-03-18 02:40:46.530956 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:40:46.530973 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:40:46.530985 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:40:46.530996 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:40:46.531006 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:40:46.531017 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:40:46.531028 | orchestrator 
| 2026-03-18 02:40:46.531039 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-18 02:40:46.531051 | orchestrator | Wednesday 18 March 2026 02:40:13 +0000 (0:00:01.103) 0:08:09.955 ******* 2026-03-18 02:40:46.531061 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:40:46.531072 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:40:46.531083 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:40:46.531093 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:40:46.531104 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:40:46.531115 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:40:46.531125 | orchestrator | 2026-03-18 02:40:46.531136 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-18 02:40:46.531171 | orchestrator | Wednesday 18 March 2026 02:40:15 +0000 (0:00:01.385) 0:08:11.341 ******* 2026-03-18 02:40:46.531182 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:40:46.531195 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:40:46.531205 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:40:46.531217 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:40:46.531228 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:40:46.531239 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:40:46.531250 | orchestrator | 2026-03-18 02:40:46.531261 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-18 02:40:46.531272 | orchestrator | Wednesday 18 March 2026 02:40:15 +0000 (0:00:00.709) 0:08:12.050 ******* 2026-03-18 02:40:46.531283 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:40:46.531293 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:40:46.531304 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:40:46.531315 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:40:46.531326 | orchestrator | ok: [testbed-node-1] 2026-03-18 
02:40:46.531337 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:40:46.531347 | orchestrator | 2026-03-18 02:40:46.531358 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-18 02:40:46.531370 | orchestrator | Wednesday 18 March 2026 02:40:16 +0000 (0:00:01.047) 0:08:13.097 ******* 2026-03-18 02:40:46.531381 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:40:46.531391 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:40:46.531402 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:40:46.531413 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:40:46.531424 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:40:46.531434 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:40:46.531445 | orchestrator | 2026-03-18 02:40:46.531456 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-18 02:40:46.531467 | orchestrator | Wednesday 18 March 2026 02:40:17 +0000 (0:00:00.692) 0:08:13.790 ******* 2026-03-18 02:40:46.531478 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:40:46.531489 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:40:46.531499 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:40:46.531510 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:40:46.531521 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:40:46.531532 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:40:46.531543 | orchestrator | 2026-03-18 02:40:46.531554 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-18 02:40:46.531564 | orchestrator | Wednesday 18 March 2026 02:40:18 +0000 (0:00:01.048) 0:08:14.838 ******* 2026-03-18 02:40:46.531599 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:40:46.531611 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:40:46.531621 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:40:46.531632 | orchestrator | skipping: [testbed-node-0] 
2026-03-18 02:40:46.531643 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:40:46.531653 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:40:46.531664 | orchestrator | 2026-03-18 02:40:46.531675 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-18 02:40:46.531686 | orchestrator | Wednesday 18 March 2026 02:40:19 +0000 (0:00:00.681) 0:08:15.519 ******* 2026-03-18 02:40:46.531696 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:40:46.531707 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:40:46.531718 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:40:46.531728 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:40:46.531739 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:40:46.531750 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:40:46.531760 | orchestrator | 2026-03-18 02:40:46.531771 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-18 02:40:46.531789 | orchestrator | Wednesday 18 March 2026 02:40:20 +0000 (0:00:00.916) 0:08:16.436 ******* 2026-03-18 02:40:46.531808 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:40:46.531827 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:40:46.531844 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:40:46.531861 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:40:46.531879 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:40:46.531896 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:40:46.531913 | orchestrator | 2026-03-18 02:40:46.531932 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-18 02:40:46.531951 | orchestrator | Wednesday 18 March 2026 02:40:20 +0000 (0:00:00.710) 0:08:17.146 ******* 2026-03-18 02:40:46.531969 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:40:46.531988 | orchestrator | skipping: [testbed-node-4] 
2026-03-18 02:40:46.532007 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:40:46.532022 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:40:46.532033 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:40:46.532044 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:40:46.532055 | orchestrator | 2026-03-18 02:40:46.532066 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-18 02:40:46.532076 | orchestrator | Wednesday 18 March 2026 02:40:21 +0000 (0:00:01.084) 0:08:18.231 ******* 2026-03-18 02:40:46.532087 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:40:46.532097 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:40:46.532108 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:40:46.532119 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:40:46.532129 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:40:46.532158 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:40:46.532169 | orchestrator | 2026-03-18 02:40:46.532199 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-18 02:40:46.532210 | orchestrator | Wednesday 18 March 2026 02:40:22 +0000 (0:00:00.742) 0:08:18.973 ******* 2026-03-18 02:40:46.532221 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:40:46.532232 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:40:46.532242 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:40:46.532253 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:40:46.532263 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:40:46.532274 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:40:46.532284 | orchestrator | 2026-03-18 02:40:46.532343 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-18 02:40:46.532354 | orchestrator | Wednesday 18 March 2026 02:40:24 +0000 (0:00:01.561) 0:08:20.535 ******* 2026-03-18 02:40:46.532365 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-18 02:40:46.532376 | orchestrator | 2026-03-18 02:40:46.532387 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-18 02:40:46.532408 | orchestrator | Wednesday 18 March 2026 02:40:28 +0000 (0:00:04.117) 0:08:24.652 ******* 2026-03-18 02:40:46.532419 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-18 02:40:46.532430 | orchestrator | 2026-03-18 02:40:46.532441 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-18 02:40:46.532452 | orchestrator | Wednesday 18 March 2026 02:40:30 +0000 (0:00:02.518) 0:08:27.171 ******* 2026-03-18 02:40:46.532462 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:40:46.532473 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:40:46.532483 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:40:46.532494 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:40:46.532504 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:40:46.532515 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:40:46.532526 | orchestrator | 2026-03-18 02:40:46.532536 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-18 02:40:46.532547 | orchestrator | Wednesday 18 March 2026 02:40:32 +0000 (0:00:01.588) 0:08:28.760 ******* 2026-03-18 02:40:46.532557 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:40:46.532568 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:40:46.532578 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:40:46.532589 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:40:46.532599 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:40:46.532609 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:40:46.532620 | orchestrator | 2026-03-18 02:40:46.532630 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-18 02:40:46.532641 | orchestrator | Wednesday 18 March 2026 02:40:33 +0000 (0:00:01.294) 0:08:30.055 ******* 2026-03-18 02:40:46.532652 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:40:46.532664 | orchestrator | 2026-03-18 02:40:46.532675 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-18 02:40:46.532686 | orchestrator | Wednesday 18 March 2026 02:40:35 +0000 (0:00:01.493) 0:08:31.549 ******* 2026-03-18 02:40:46.532696 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:40:46.532707 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:40:46.532717 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:40:46.532728 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:40:46.532739 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:40:46.532750 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:40:46.532760 | orchestrator | 2026-03-18 02:40:46.532771 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-18 02:40:46.532781 | orchestrator | Wednesday 18 March 2026 02:40:36 +0000 (0:00:01.700) 0:08:33.250 ******* 2026-03-18 02:40:46.532792 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:40:46.532802 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:40:46.532813 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:40:46.532823 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:40:46.532833 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:40:46.532844 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:40:46.532854 | orchestrator | 2026-03-18 02:40:46.532865 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-18 02:40:46.532875 | orchestrator | Wednesday 18 March 2026 02:40:40 +0000 (0:00:03.792) 
0:08:37.042 ******* 2026-03-18 02:40:46.532886 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:40:46.532897 | orchestrator | 2026-03-18 02:40:46.532908 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-18 02:40:46.532924 | orchestrator | Wednesday 18 March 2026 02:40:42 +0000 (0:00:01.376) 0:08:38.418 ******* 2026-03-18 02:40:46.532935 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:40:46.532946 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:40:46.532956 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:40:46.532973 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:40:46.532983 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:40:46.532994 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:40:46.533004 | orchestrator | 2026-03-18 02:40:46.533015 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-18 02:40:46.533025 | orchestrator | Wednesday 18 March 2026 02:40:42 +0000 (0:00:00.735) 0:08:39.154 ******* 2026-03-18 02:40:46.533036 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:40:46.533046 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:40:46.533057 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:40:46.533067 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:40:46.533078 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:40:46.533088 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:40:46.533098 | orchestrator | 2026-03-18 02:40:46.533109 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-18 02:40:46.533120 | orchestrator | Wednesday 18 March 2026 02:40:45 +0000 (0:00:02.640) 0:08:41.794 ******* 2026-03-18 02:40:46.533130 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:40:46.533211 | orchestrator 
| ok: [testbed-node-4] 2026-03-18 02:40:46.533224 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:40:46.533235 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:40:46.533245 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:40:46.533256 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:40:46.533267 | orchestrator | 2026-03-18 02:40:46.533287 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-18 02:41:15.168381 | orchestrator | 2026-03-18 02:41:15.168507 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-18 02:41:15.168532 | orchestrator | Wednesday 18 March 2026 02:40:46 +0000 (0:00:00.973) 0:08:42.768 ******* 2026-03-18 02:41:15.168551 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 02:41:15.168570 | orchestrator | 2026-03-18 02:41:15.168587 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-18 02:41:15.168604 | orchestrator | Wednesday 18 March 2026 02:40:47 +0000 (0:00:00.897) 0:08:43.666 ******* 2026-03-18 02:41:15.168621 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 02:41:15.168637 | orchestrator | 2026-03-18 02:41:15.168653 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-18 02:41:15.168671 | orchestrator | Wednesday 18 March 2026 02:40:47 +0000 (0:00:00.565) 0:08:44.231 ******* 2026-03-18 02:41:15.168687 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:41:15.168706 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:41:15.168721 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:41:15.168738 | orchestrator | 2026-03-18 02:41:15.168754 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-03-18 02:41:15.168772 | orchestrator | Wednesday 18 March 2026 02:40:48 +0000 (0:00:00.646) 0:08:44.878 ******* 2026-03-18 02:41:15.168788 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:41:15.168805 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:41:15.168821 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:41:15.168837 | orchestrator | 2026-03-18 02:41:15.168853 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-18 02:41:15.168871 | orchestrator | Wednesday 18 March 2026 02:40:49 +0000 (0:00:00.738) 0:08:45.617 ******* 2026-03-18 02:41:15.168886 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:41:15.168903 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:41:15.168919 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:41:15.168936 | orchestrator | 2026-03-18 02:41:15.168953 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-18 02:41:15.168969 | orchestrator | Wednesday 18 March 2026 02:40:50 +0000 (0:00:00.737) 0:08:46.354 ******* 2026-03-18 02:41:15.168986 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:41:15.169003 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:41:15.169049 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:41:15.169066 | orchestrator | 2026-03-18 02:41:15.169082 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-18 02:41:15.169099 | orchestrator | Wednesday 18 March 2026 02:40:51 +0000 (0:00:01.004) 0:08:47.359 ******* 2026-03-18 02:41:15.169115 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:41:15.169132 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:41:15.169176 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:41:15.169194 | orchestrator | 2026-03-18 02:41:15.169211 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-18 
02:41:15.169227 | orchestrator | Wednesday 18 March 2026 02:40:51 +0000 (0:00:00.379) 0:08:47.739 ******* 2026-03-18 02:41:15.169243 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:41:15.169255 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:41:15.169267 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:41:15.169277 | orchestrator | 2026-03-18 02:41:15.169286 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-18 02:41:15.169296 | orchestrator | Wednesday 18 March 2026 02:40:51 +0000 (0:00:00.352) 0:08:48.091 ******* 2026-03-18 02:41:15.169306 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:41:15.169315 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:41:15.169325 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:41:15.169334 | orchestrator | 2026-03-18 02:41:15.169349 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-18 02:41:15.169362 | orchestrator | Wednesday 18 March 2026 02:40:52 +0000 (0:00:00.372) 0:08:48.464 ******* 2026-03-18 02:41:15.169371 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:41:15.169381 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:41:15.169391 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:41:15.169400 | orchestrator | 2026-03-18 02:41:15.169410 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-18 02:41:15.169419 | orchestrator | Wednesday 18 March 2026 02:40:53 +0000 (0:00:01.058) 0:08:49.522 ******* 2026-03-18 02:41:15.169429 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:41:15.169438 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:41:15.169448 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:41:15.169457 | orchestrator | 2026-03-18 02:41:15.169487 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-18 02:41:15.169498 | orchestrator | 
Wednesday 18 March 2026 02:40:54 +0000 (0:00:00.774) 0:08:50.297 ******* 2026-03-18 02:41:15.169508 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:41:15.169517 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:41:15.169527 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:41:15.169537 | orchestrator | 2026-03-18 02:41:15.169546 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-18 02:41:15.169556 | orchestrator | Wednesday 18 March 2026 02:40:54 +0000 (0:00:00.343) 0:08:50.641 ******* 2026-03-18 02:41:15.169565 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:41:15.169575 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:41:15.169584 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:41:15.169594 | orchestrator | 2026-03-18 02:41:15.169603 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-18 02:41:15.169613 | orchestrator | Wednesday 18 March 2026 02:40:54 +0000 (0:00:00.359) 0:08:51.000 ******* 2026-03-18 02:41:15.169622 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:41:15.169632 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:41:15.169641 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:41:15.169651 | orchestrator | 2026-03-18 02:41:15.169660 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-18 02:41:15.169670 | orchestrator | Wednesday 18 March 2026 02:40:55 +0000 (0:00:00.662) 0:08:51.663 ******* 2026-03-18 02:41:15.169679 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:41:15.169689 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:41:15.169698 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:41:15.169708 | orchestrator | 2026-03-18 02:41:15.169744 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-18 02:41:15.169755 | orchestrator | Wednesday 18 March 2026 02:40:55 +0000 
(0:00:00.366) 0:08:52.030 ******* 2026-03-18 02:41:15.169765 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:41:15.169774 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:41:15.169784 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:41:15.169793 | orchestrator | 2026-03-18 02:41:15.169802 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-18 02:41:15.169812 | orchestrator | Wednesday 18 March 2026 02:40:56 +0000 (0:00:00.366) 0:08:52.396 ******* 2026-03-18 02:41:15.169821 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:41:15.169831 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:41:15.169840 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:41:15.169849 | orchestrator | 2026-03-18 02:41:15.169859 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-18 02:41:15.169868 | orchestrator | Wednesday 18 March 2026 02:40:56 +0000 (0:00:00.353) 0:08:52.749 ******* 2026-03-18 02:41:15.169878 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:41:15.169887 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:41:15.169896 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:41:15.169906 | orchestrator | 2026-03-18 02:41:15.169915 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-18 02:41:15.169924 | orchestrator | Wednesday 18 March 2026 02:40:57 +0000 (0:00:00.634) 0:08:53.384 ******* 2026-03-18 02:41:15.169941 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:41:15.169957 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:41:15.169994 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:41:15.170093 | orchestrator | 2026-03-18 02:41:15.170109 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-18 02:41:15.170119 | orchestrator | Wednesday 18 March 2026 02:40:57 +0000 (0:00:00.373) 
0:08:53.757 *******
2026-03-18 02:41:15.170128 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:41:15.170167 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:41:15.170180 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:41:15.170192 | orchestrator |
2026-03-18 02:41:15.170208 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-18 02:41:15.170222 | orchestrator | Wednesday 18 March 2026 02:40:57 +0000 (0:00:00.370) 0:08:54.127 *******
2026-03-18 02:41:15.170236 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:41:15.170253 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:41:15.170269 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:41:15.170285 | orchestrator |
2026-03-18 02:41:15.170300 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-18 02:41:15.170315 | orchestrator | Wednesday 18 March 2026 02:40:58 +0000 (0:00:00.861) 0:08:54.989 *******
2026-03-18 02:41:15.170331 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:41:15.170348 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:41:15.170364 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-18 02:41:15.170383 | orchestrator |
2026-03-18 02:41:15.170396 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-18 02:41:15.170406 | orchestrator | Wednesday 18 March 2026 02:40:59 +0000 (0:00:00.484) 0:08:55.473 *******
2026-03-18 02:41:15.170415 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-18 02:41:15.170425 | orchestrator |
2026-03-18 02:41:15.170434 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-18 02:41:15.170443 | orchestrator | Wednesday 18 March 2026 02:41:01 +0000 (0:00:02.111) 0:08:57.584 *******
2026-03-18 02:41:15.170455 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-03-18 02:41:15.170468 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:41:15.170478 | orchestrator |
2026-03-18 02:41:15.170487 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-03-18 02:41:15.170507 | orchestrator | Wednesday 18 March 2026 02:41:01 +0000 (0:00:00.237) 0:08:57.822 *******
2026-03-18 02:41:15.170519 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-18 02:41:15.170544 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-18 02:41:15.170555 | orchestrator |
2026-03-18 02:41:15.170565 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-03-18 02:41:15.170574 | orchestrator | Wednesday 18 March 2026 02:41:09 +0000 (0:00:07.982) 0:09:05.804 *******
2026-03-18 02:41:15.170584 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-18 02:41:15.170593 | orchestrator |
2026-03-18 02:41:15.170603 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-18 02:41:15.170613 | orchestrator | Wednesday 18 March 2026 02:41:13 +0000 (0:00:03.598) 0:09:09.402 *******
2026-03-18 02:41:15.170622 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:41:15.170632 | orchestrator |
2026-03-18 02:41:15.170642 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-18 02:41:15.170651 | orchestrator | Wednesday 18 March 2026 02:41:14 +0000 (0:00:00.921) 0:09:10.324 *******
2026-03-18 02:41:15.170661 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-18 02:41:15.170682 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-18 02:41:42.369837 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-18 02:41:42.369929 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-03-18 02:41:42.369941 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-03-18 02:41:42.369950 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-18 02:41:42.369959 | orchestrator |
2026-03-18 02:41:42.369968 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-18 02:41:42.369976 | orchestrator | Wednesday 18 March 2026 02:41:15 +0000 (0:00:01.084) 0:09:11.408 *******
2026-03-18 02:41:42.369984 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 02:41:42.369993 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-18 02:41:42.370073 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-18 02:41:42.370083 | orchestrator |
2026-03-18 02:41:42.370091 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-18 02:41:42.370099 | orchestrator | Wednesday 18 March 2026 02:41:17 +0000 (0:00:02.003) 0:09:13.412 *******
2026-03-18 02:41:42.370107 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-18 02:41:42.370116 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-18 02:41:42.370123 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:41:42.370155 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-18 02:41:42.370166 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-18 02:41:42.370174 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:41:42.370182 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-18 02:41:42.370190 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-18 02:41:42.370198 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:41:42.370206 | orchestrator |
2026-03-18 02:41:42.370214 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-18 02:41:42.370222 | orchestrator | Wednesday 18 March 2026 02:41:18 +0000 (0:00:01.118) 0:09:14.530 *******
2026-03-18 02:41:42.370253 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:41:42.370261 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:41:42.370269 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:41:42.370277 | orchestrator |
2026-03-18 02:41:42.370299 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-18 02:41:42.370307 | orchestrator | Wednesday 18 March 2026 02:41:21 +0000 (0:00:02.961) 0:09:17.492 *******
2026-03-18 02:41:42.370315 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:41:42.370323 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:41:42.370331 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:41:42.370338 | orchestrator |
2026-03-18 02:41:42.370346 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-18 02:41:42.370354 | orchestrator | Wednesday 18 March 2026 02:41:21 +0000 (0:00:00.368) 0:09:17.860 *******
2026-03-18 02:41:42.370362 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:41:42.370370 | orchestrator |
2026-03-18 02:41:42.370378 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-18 02:41:42.370386 | orchestrator | Wednesday 18 March 2026 02:41:22 +0000 (0:00:00.867) 0:09:18.728 *******
2026-03-18 02:41:42.370395 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:41:42.370405 | orchestrator |
2026-03-18 02:41:42.370413 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-18 02:41:42.370422 | orchestrator | Wednesday 18 March 2026 02:41:23 +0000 (0:00:00.605) 0:09:19.333 *******
2026-03-18 02:41:42.370431 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:41:42.370440 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:41:42.370449 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:41:42.370457 | orchestrator |
2026-03-18 02:41:42.370466 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-18 02:41:42.370475 | orchestrator | Wednesday 18 March 2026 02:41:24 +0000 (0:00:01.268) 0:09:20.601 *******
2026-03-18 02:41:42.370485 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:41:42.370494 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:41:42.370502 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:41:42.370511 | orchestrator |
2026-03-18 02:41:42.370520 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-18 02:41:42.370548 | orchestrator | Wednesday 18 March 2026 02:41:25 +0000 (0:00:01.511) 0:09:22.113 *******
2026-03-18 02:41:42.370564 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:41:42.370576 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:41:42.370589 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:41:42.370602 | orchestrator |
2026-03-18 02:41:42.370615 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-18 02:41:42.370627 | orchestrator | Wednesday 18 March 2026 02:41:27 +0000 (0:00:01.838) 0:09:23.952 *******
2026-03-18 02:41:42.370640 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:41:42.370653 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:41:42.370667 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:41:42.370680 | orchestrator |
2026-03-18 02:41:42.370694 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-18 02:41:42.370709 | orchestrator | Wednesday 18 March 2026 02:41:29 +0000 (0:00:02.004) 0:09:25.956 *******
2026-03-18 02:41:42.370723 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:41:42.370736 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:41:42.370752 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:41:42.370765 | orchestrator |
2026-03-18 02:41:42.370777 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-18 02:41:42.370786 | orchestrator | Wednesday 18 March 2026 02:41:31 +0000 (0:00:01.687) 0:09:27.644 *******
2026-03-18 02:41:42.370793 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:41:42.370801 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:41:42.370818 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:41:42.370826 | orchestrator |
2026-03-18 02:41:42.370850 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-18 02:41:42.370858 | orchestrator | Wednesday 18 March 2026 02:41:32 +0000 (0:00:00.863) 0:09:28.359 *******
2026-03-18 02:41:42.370866 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:41:42.370874 | orchestrator |
2026-03-18 02:41:42.370882 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-18 02:41:42.370890 | orchestrator | Wednesday 18 March 2026 02:41:32 +0000 (0:00:00.368) 0:09:29.222 *******
2026-03-18 02:41:42.370898 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:41:42.370906 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:41:42.370913 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:41:42.370921 | orchestrator |
2026-03-18 02:41:42.370929 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-18 02:41:42.370937 | orchestrator | Wednesday 18 March 2026 02:41:33 +0000 (0:00:00.368) 0:09:29.591 *******
2026-03-18 02:41:42.370945 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:41:42.370953 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:41:42.370960 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:41:42.370968 | orchestrator |
2026-03-18 02:41:42.370976 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-18 02:41:42.370984 | orchestrator | Wednesday 18 March 2026 02:41:34 +0000 (0:00:01.212) 0:09:30.803 *******
2026-03-18 02:41:42.370992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-18 02:41:42.371000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-18 02:41:42.371008 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-18 02:41:42.371016 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:41:42.371023 | orchestrator |
2026-03-18 02:41:42.371031 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-18 02:41:42.371039 | orchestrator | Wednesday 18 March 2026 02:41:35 +0000 (0:00:01.099) 0:09:31.903 *******
2026-03-18 02:41:42.371047 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:41:42.371055 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:41:42.371062 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:41:42.371070 | orchestrator |
2026-03-18 02:41:42.371078 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-18 02:41:42.371086 | orchestrator |
2026-03-18 02:41:42.371094 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-18 02:41:42.371102 | orchestrator | Wednesday 18 March 2026 02:41:36 +0000 (0:00:00.988) 0:09:32.892 *******
2026-03-18 02:41:42.371110 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:41:42.371119 | orchestrator |
2026-03-18 02:41:42.371127 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-18 02:41:42.371150 | orchestrator | Wednesday 18 March 2026 02:41:37 +0000 (0:00:00.575) 0:09:33.467 *******
2026-03-18 02:41:42.371158 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:41:42.371166 | orchestrator |
2026-03-18 02:41:42.371174 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-18 02:41:42.371182 | orchestrator | Wednesday 18 March 2026 02:41:38 +0000 (0:00:00.853) 0:09:34.321 *******
2026-03-18 02:41:42.371189 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:41:42.371197 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:41:42.371205 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:41:42.371213 | orchestrator |
2026-03-18 02:41:42.371221 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-18 02:41:42.371229 | orchestrator | Wednesday 18 March 2026 02:41:38 +0000 (0:00:00.380) 0:09:34.702 *******
2026-03-18 02:41:42.371242 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:41:42.371250 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:41:42.371258 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:41:42.371266 | orchestrator |
2026-03-18 02:41:42.371274 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-18 02:41:42.371282 | orchestrator | Wednesday 18 March 2026 02:41:39 +0000 (0:00:00.699) 0:09:35.401 *******
2026-03-18 02:41:42.371290 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:41:42.371298 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:41:42.371307 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:41:42.371315 | orchestrator |
2026-03-18 02:41:42.371324 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-18 02:41:42.371333 | orchestrator | Wednesday 18 March 2026 02:41:40 +0000 (0:00:01.061) 0:09:36.463 *******
2026-03-18 02:41:42.371347 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:41:42.371356 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:41:42.371364 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:41:42.371373 | orchestrator |
2026-03-18 02:41:42.371382 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-18 02:41:42.371390 | orchestrator | Wednesday 18 March 2026 02:41:40 +0000 (0:00:00.762) 0:09:37.225 *******
2026-03-18 02:41:42.371399 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:41:42.371408 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:41:42.371417 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:41:42.371425 | orchestrator |
2026-03-18 02:41:42.371434 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-18 02:41:42.371443 | orchestrator | Wednesday 18 March 2026 02:41:41 +0000 (0:00:00.365) 0:09:37.590 *******
2026-03-18 02:41:42.371451 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:41:42.371460 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:41:42.371469 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:41:42.371477 | orchestrator |
2026-03-18 02:41:42.371486 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-18 02:41:42.371495 | orchestrator | Wednesday 18 March 2026 02:41:41 +0000 (0:00:00.344) 0:09:37.934 *******
2026-03-18 02:41:42.371503 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:41:42.371512 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:41:42.371520 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:41:42.371529 | orchestrator |
2026-03-18 02:41:42.371538 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-18 02:41:42.371552 | orchestrator | Wednesday 18 March 2026 02:41:42 +0000 (0:00:00.673) 0:09:38.608 *******
2026-03-18 02:42:04.798614 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:42:04.798732 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:42:04.798748 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:42:04.798761 | orchestrator |
2026-03-18 02:42:04.798774 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-18 02:42:04.798786 | orchestrator | Wednesday 18 March 2026 02:41:43 +0000 (0:00:00.768) 0:09:39.377 *******
2026-03-18 02:42:04.798797 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:42:04.798808 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:42:04.798819 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:42:04.798830 | orchestrator |
2026-03-18 02:42:04.798841 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-18 02:42:04.798852 | orchestrator | Wednesday 18 March 2026 02:41:43 +0000 (0:00:00.788) 0:09:40.166 *******
2026-03-18 02:42:04.798877 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:42:04.798890 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:42:04.798901 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:42:04.798911 | orchestrator |
2026-03-18 02:42:04.798922 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-18 02:42:04.798933 | orchestrator | Wednesday 18 March 2026 02:41:44 +0000 (0:00:00.352) 0:09:40.518 *******
2026-03-18 02:42:04.798944 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:42:04.798955 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:42:04.798990 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:42:04.799001 | orchestrator |
2026-03-18 02:42:04.799013 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-18 02:42:04.799025 | orchestrator | Wednesday 18 March 2026 02:41:44 +0000 (0:00:00.641) 0:09:41.160 *******
2026-03-18 02:42:04.799036 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:42:04.799046 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:42:04.799057 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:42:04.799067 | orchestrator |
2026-03-18 02:42:04.799078 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-18 02:42:04.799089 | orchestrator | Wednesday 18 March 2026 02:41:45 +0000 (0:00:00.393) 0:09:41.554 *******
2026-03-18 02:42:04.799100 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:42:04.799110 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:42:04.799121 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:42:04.799200 | orchestrator |
2026-03-18 02:42:04.799225 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-18 02:42:04.799245 | orchestrator | Wednesday 18 March 2026 02:41:45 +0000 (0:00:00.370) 0:09:41.924 *******
2026-03-18 02:42:04.799263 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:42:04.799276 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:42:04.799289 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:42:04.799301 | orchestrator |
2026-03-18 02:42:04.799313 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-18 02:42:04.799325 | orchestrator | Wednesday 18 March 2026 02:41:46 +0000 (0:00:00.370) 0:09:42.294 *******
2026-03-18 02:42:04.799338 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:42:04.799351 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:42:04.799363 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:42:04.799375 | orchestrator |
2026-03-18 02:42:04.799387 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-18 02:42:04.799400 | orchestrator | Wednesday 18 March 2026 02:41:46 +0000 (0:00:00.628) 0:09:42.922 *******
2026-03-18 02:42:04.799412 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:42:04.799425 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:42:04.799437 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:42:04.799448 | orchestrator |
2026-03-18 02:42:04.799458 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-18 02:42:04.799469 | orchestrator | Wednesday 18 March 2026 02:41:47 +0000 (0:00:00.423) 0:09:43.346 *******
2026-03-18 02:42:04.799480 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:42:04.799490 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:42:04.799501 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:42:04.799511 | orchestrator |
2026-03-18 02:42:04.799522 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-18 02:42:04.799533 | orchestrator | Wednesday 18 March 2026 02:41:47 +0000 (0:00:00.393) 0:09:43.740 *******
2026-03-18 02:42:04.799543 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:42:04.799554 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:42:04.799565 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:42:04.799575 | orchestrator |
2026-03-18 02:42:04.799586 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-18 02:42:04.799597 | orchestrator | Wednesday 18 March 2026 02:41:47 +0000 (0:00:00.368) 0:09:44.109 *******
2026-03-18 02:42:04.799607 | orchestrator | ok: [testbed-node-3]
2026-03-18 02:42:04.799618 | orchestrator | ok: [testbed-node-4]
2026-03-18 02:42:04.799643 | orchestrator | ok: [testbed-node-5]
2026-03-18 02:42:04.799655 | orchestrator |
2026-03-18 02:42:04.799666 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-03-18 02:42:04.799677 | orchestrator | Wednesday 18 March 2026 02:41:48 +0000 (0:00:00.906) 0:09:45.015 *******
2026-03-18 02:42:04.799688 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:42:04.799700 | orchestrator |
2026-03-18 02:42:04.799711 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-18 02:42:04.799732 | orchestrator | Wednesday 18 March 2026 02:41:49 +0000 (0:00:00.603) 0:09:45.618 *******
2026-03-18 02:42:04.799743 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 02:42:04.799754 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-18 02:42:04.799765 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-18 02:42:04.799776 | orchestrator |
2026-03-18 02:42:04.799787 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-18 02:42:04.799797 | orchestrator | Wednesday 18 March 2026 02:41:51 +0000 (0:00:02.488) 0:09:48.107 *******
2026-03-18 02:42:04.799808 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-18 02:42:04.799819 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-18 02:42:04.799830 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:42:04.799841 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-18 02:42:04.799852 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-18 02:42:04.799881 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:42:04.799893 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-18 02:42:04.799903 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-18 02:42:04.799914 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:42:04.799925 | orchestrator |
2026-03-18 02:42:04.799935 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-03-18 02:42:04.799946 | orchestrator | Wednesday 18 March 2026 02:41:53 +0000 (0:00:01.567) 0:09:49.674 *******
2026-03-18 02:42:04.799957 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:42:04.799968 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:42:04.799978 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:42:04.799989 | orchestrator |
2026-03-18 02:42:04.800000 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-03-18 02:42:04.800010 | orchestrator | Wednesday 18 March 2026 02:41:53 +0000 (0:00:00.359) 0:09:50.034 *******
2026-03-18 02:42:04.800021 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:42:04.800032 | orchestrator |
2026-03-18 02:42:04.800043 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-03-18 02:42:04.800053 | orchestrator | Wednesday 18 March 2026 02:41:54 +0000 (0:00:00.583) 0:09:50.617 *******
2026-03-18 02:42:04.800065 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-18 02:42:04.800077 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-18 02:42:04.800089 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-18 02:42:04.800099 | orchestrator |
2026-03-18 02:42:04.800110 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-03-18 02:42:04.800121 | orchestrator | Wednesday 18 March 2026 02:41:55 +0000 (0:00:01.207) 0:09:51.825 *******
2026-03-18 02:42:04.800153 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 02:42:04.800167 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-18 02:42:04.800178 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 02:42:04.800189 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-18 02:42:04.800200 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 02:42:04.800210 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-18 02:42:04.800228 | orchestrator |
2026-03-18 02:42:04.800239 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-18 02:42:04.800250 | orchestrator | Wednesday 18 March 2026 02:42:00 +0000 (0:00:04.589) 0:09:56.415 *******
2026-03-18 02:42:04.800260 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 02:42:04.800271 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-18 02:42:04.800290 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 02:42:04.800306 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-18 02:42:04.800324 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 02:42:04.800342 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-18 02:42:04.800359 | orchestrator |
2026-03-18 02:42:04.800377 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-18 02:42:04.800401 | orchestrator | Wednesday 18 March 2026 02:42:02 +0000 (0:00:02.242) 0:09:58.657 *******
2026-03-18 02:42:04.800418 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-18 02:42:04.800435 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:42:04.800450 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-18 02:42:04.800467 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:42:04.800484 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-18 02:42:04.800500 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:42:04.800517 | orchestrator |
2026-03-18 02:42:04.800535 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-03-18 02:42:04.800552 | orchestrator | Wednesday 18 March 2026 02:42:03 +0000 (0:00:01.468) 0:10:00.126 *******
2026-03-18 02:42:04.800570 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-03-18 02:42:04.800587 | orchestrator |
2026-03-18 02:42:04.800604 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-03-18 02:42:04.800621 | orchestrator | Wednesday 18 March 2026 02:42:04 +0000 (0:00:00.246) 0:10:00.372 *******
2026-03-18 02:42:04.800640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 02:42:04.800657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 02:42:04.800689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 02:42:48.865355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 02:42:48.865449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 02:42:48.865462 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:42:48.865473 | orchestrator |
2026-03-18 02:42:48.865483 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-03-18 02:42:48.865494 | orchestrator | Wednesday 18 March 2026 02:42:04 +0000 (0:00:00.666) 0:10:01.039 *******
2026-03-18 02:42:48.865502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 02:42:48.865511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 02:42:48.865519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 02:42:48.865528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 02:42:48.865536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 02:42:48.865566 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:42:48.865575 | orchestrator |
2026-03-18 02:42:48.865584 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-03-18 02:42:48.865592 | orchestrator | Wednesday 18 March 2026 02:42:05 +0000 (0:00:00.685) 0:10:01.724 *******
2026-03-18 02:42:48.865601 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 02:42:48.865609 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 02:42:48.865614 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 02:42:48.865619 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 02:42:48.865625 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 02:42:48.865629 | orchestrator |
2026-03-18 02:42:48.865634 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-03-18 02:42:48.865639 | orchestrator | Wednesday 18 March 2026 02:42:35 +0000 (0:00:30.105) 0:10:31.830 *******
2026-03-18 02:42:48.865644 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:42:48.865649 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:42:48.865653 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:42:48.865658 | orchestrator |
2026-03-18 02:42:48.865663 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-03-18 02:42:48.865668 | orchestrator | Wednesday 18 March 2026 02:42:35 +0000 (0:00:00.362) 0:10:32.192 *******
2026-03-18 02:42:48.865672 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:42:48.865677 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:42:48.865682 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:42:48.865686 | orchestrator |
2026-03-18 02:42:48.865691 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-03-18 02:42:48.865696 | orchestrator | Wednesday 18 March 2026 02:42:36 +0000 (0:00:00.353) 0:10:32.546 *******
2026-03-18 02:42:48.865714 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:42:48.865719 | orchestrator |
2026-03-18 02:42:48.865724 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-03-18 02:42:48.865728 | orchestrator | Wednesday 18 March 2026 02:42:37 +0000 (0:00:00.941) 0:10:33.488 *******
2026-03-18 02:42:48.865733 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 02:42:48.865739 | orchestrator |
2026-03-18 02:42:48.865743 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-03-18 02:42:48.865748 | orchestrator | Wednesday 18 March 2026 02:42:37 +0000 (0:00:00.573) 0:10:34.061 *******
2026-03-18 02:42:48.865753 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:42:48.865758 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:42:48.865763 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:42:48.865768 | orchestrator |
2026-03-18 02:42:48.865773 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-03-18 02:42:48.865778 | orchestrator | Wednesday 18 March 2026 02:42:39 +0000 (0:00:01.684) 0:10:35.746 *******
2026-03-18 02:42:48.865783 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:42:48.865788 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:42:48.865792 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:42:48.865797 | orchestrator |
2026-03-18 02:42:48.865802 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-03-18 02:42:48.865812 | orchestrator | Wednesday 18 March 2026 02:42:40 +0000 (0:00:01.151) 0:10:36.897 *******
2026-03-18 02:42:48.865817 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:42:48.865835 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:42:48.865840 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:42:48.865845 | orchestrator |
2026-03-18 02:42:48.865850 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-03-18 02:42:48.865855 | orchestrator | Wednesday 18 March 2026 02:42:42 +0000 (0:00:01.746) 0:10:38.644 *******
2026-03-18 02:42:48.865860 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-18 02:42:48.865865 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-18 02:42:48.865870 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-18 02:42:48.865875 | orchestrator |
2026-03-18 02:42:48.865879 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-18 02:42:48.865884 | orchestrator | Wednesday 18 March 2026 02:42:45 +0000 (0:00:02.814) 0:10:41.459 *******
2026-03-18 02:42:48.865889 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:42:48.865894 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:42:48.865898 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:42:48.865903 | orchestrator
| 2026-03-18 02:42:48.865908 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-18 02:42:48.865913 | orchestrator | Wednesday 18 March 2026 02:42:45 +0000 (0:00:00.405) 0:10:41.864 ******* 2026-03-18 02:42:48.865918 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 02:42:48.865923 | orchestrator | 2026-03-18 02:42:48.865928 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-18 02:42:48.865934 | orchestrator | Wednesday 18 March 2026 02:42:46 +0000 (0:00:00.896) 0:10:42.760 ******* 2026-03-18 02:42:48.865940 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:42:48.865946 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:42:48.865952 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:42:48.865957 | orchestrator | 2026-03-18 02:42:48.865963 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-18 02:42:48.865969 | orchestrator | Wednesday 18 March 2026 02:42:46 +0000 (0:00:00.369) 0:10:43.130 ******* 2026-03-18 02:42:48.865974 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:42:48.865979 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:42:48.865985 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:42:48.865990 | orchestrator | 2026-03-18 02:42:48.865996 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-18 02:42:48.866001 | orchestrator | Wednesday 18 March 2026 02:42:47 +0000 (0:00:00.380) 0:10:43.510 ******* 2026-03-18 02:42:48.866007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 02:42:48.866059 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 02:42:48.866066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 02:42:48.866072 | orchestrator 
| skipping: [testbed-node-3] 2026-03-18 02:42:48.866077 | orchestrator | 2026-03-18 02:42:48.866083 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-18 02:42:48.866089 | orchestrator | Wednesday 18 March 2026 02:42:48 +0000 (0:00:00.988) 0:10:44.499 ******* 2026-03-18 02:42:48.866094 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:42:48.866100 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:42:48.866105 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:42:48.866110 | orchestrator | 2026-03-18 02:42:48.866114 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 02:42:48.866119 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-18 02:42:48.866147 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-18 02:42:48.866156 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-18 02:42:48.866168 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-18 02:42:48.866176 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-18 02:42:48.866184 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-18 02:42:48.866191 | orchestrator | 2026-03-18 02:42:48.866199 | orchestrator | 2026-03-18 02:42:48.866207 | orchestrator | 2026-03-18 02:42:48.866215 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 02:42:48.866223 | orchestrator | Wednesday 18 March 2026 02:42:48 +0000 (0:00:00.592) 0:10:45.092 ******* 2026-03-18 02:42:48.866230 | orchestrator | =============================================================================== 
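The PLAY RECAP above reports per-host counters in a fixed `key=value` layout. A small sketch of how such a recap line can be parsed and checked for failure (the helper names are hypothetical; the pass condition mirrors the usual Ansible convention that any `failed` or `unreachable` count aborts the job):

```python
import re

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Split one PLAY RECAP line into (host, counter dict)."""
    host, _, counters = line.partition(":")
    counts = {key: int(val) for key, val in re.findall(r"(\w+)=(\d+)", counters)}
    return host.strip(), counts

def play_failed(counts: dict[str, int]) -> bool:
    """A host fails the play if it reports failed or unreachable tasks."""
    return counts.get("failed", 0) > 0 or counts.get("unreachable", 0) > 0
```

Applied to the recap above, all six testbed nodes report `failed=0 unreachable=0`, so the ceph deploy play passed.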
2026-03-18 02:42:48.866238 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 62.63s 2026-03-18 02:42:48.866246 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.95s 2026-03-18 02:42:48.866254 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.11s 2026-03-18 02:42:48.866261 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.35s 2026-03-18 02:42:48.866269 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.44s 2026-03-18 02:42:48.866284 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.49s 2026-03-18 02:42:49.364813 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.31s 2026-03-18 02:42:49.364918 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.13s 2026-03-18 02:42:49.364933 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.98s 2026-03-18 02:42:49.364945 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.53s 2026-03-18 02:42:49.364956 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.30s 2026-03-18 02:42:49.364966 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.08s 2026-03-18 02:42:49.364977 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.59s 2026-03-18 02:42:49.364987 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.12s 2026-03-18 02:42:49.365017 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.79s 2026-03-18 02:42:49.365029 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.64s 2026-03-18 
02:42:49.365039 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.60s 2026-03-18 02:42:49.365051 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.33s 2026-03-18 02:42:49.365063 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.14s 2026-03-18 02:42:49.365073 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.03s 2026-03-18 02:42:52.084009 | orchestrator | 2026-03-18 02:42:52 | INFO  | Task ab943649-fb79-4147-8f28-9109b090e333 (ceph-pools) was prepared for execution. 2026-03-18 02:42:52.084088 | orchestrator | 2026-03-18 02:42:52 | INFO  | It takes a moment until task ab943649-fb79-4147-8f28-9109b090e333 (ceph-pools) has been started and output is visible here. 2026-03-18 02:43:07.428288 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-18 02:43:07.428409 | orchestrator | 2.16.14 2026-03-18 02:43:07.428421 | orchestrator | 2026-03-18 02:43:07.428429 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-18 02:43:07.428436 | orchestrator | 2026-03-18 02:43:07.428443 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 02:43:07.428494 | orchestrator | Wednesday 18 March 2026 02:42:56 +0000 (0:00:00.650) 0:00:00.650 ******* 2026-03-18 02:43:07.428503 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 02:43:07.428511 | orchestrator | 2026-03-18 02:43:07.428518 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-18 02:43:07.428524 | orchestrator | Wednesday 18 March 2026 02:42:57 +0000 (0:00:00.719) 0:00:01.370 ******* 2026-03-18 02:43:07.428531 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:43:07.428538 | 
orchestrator | ok: [testbed-node-4] 2026-03-18 02:43:07.428544 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:43:07.428550 | orchestrator | 2026-03-18 02:43:07.428557 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-18 02:43:07.428564 | orchestrator | Wednesday 18 March 2026 02:42:58 +0000 (0:00:00.698) 0:00:02.068 ******* 2026-03-18 02:43:07.428570 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:43:07.428577 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:43:07.428583 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:43:07.428589 | orchestrator | 2026-03-18 02:43:07.428595 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 02:43:07.428602 | orchestrator | Wednesday 18 March 2026 02:42:58 +0000 (0:00:00.318) 0:00:02.387 ******* 2026-03-18 02:43:07.428608 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:43:07.428614 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:43:07.428621 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:43:07.428627 | orchestrator | 2026-03-18 02:43:07.428633 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 02:43:07.428640 | orchestrator | Wednesday 18 March 2026 02:42:59 +0000 (0:00:00.872) 0:00:03.260 ******* 2026-03-18 02:43:07.428646 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:43:07.428652 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:43:07.428671 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:43:07.428678 | orchestrator | 2026-03-18 02:43:07.428684 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-18 02:43:07.428709 | orchestrator | Wednesday 18 March 2026 02:42:59 +0000 (0:00:00.331) 0:00:03.592 ******* 2026-03-18 02:43:07.428716 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:43:07.428722 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:43:07.428740 | 
orchestrator | ok: [testbed-node-5] 2026-03-18 02:43:07.428755 | orchestrator | 2026-03-18 02:43:07.428762 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-18 02:43:07.428768 | orchestrator | Wednesday 18 March 2026 02:43:00 +0000 (0:00:00.369) 0:00:03.961 ******* 2026-03-18 02:43:07.428774 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:43:07.428780 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:43:07.428786 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:43:07.428793 | orchestrator | 2026-03-18 02:43:07.428799 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-18 02:43:07.428805 | orchestrator | Wednesday 18 March 2026 02:43:00 +0000 (0:00:00.351) 0:00:04.312 ******* 2026-03-18 02:43:07.428812 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:07.428820 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:07.428826 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:07.428832 | orchestrator | 2026-03-18 02:43:07.428839 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-18 02:43:07.428845 | orchestrator | Wednesday 18 March 2026 02:43:01 +0000 (0:00:00.607) 0:00:04.920 ******* 2026-03-18 02:43:07.428851 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:43:07.428857 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:43:07.428869 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:43:07.428876 | orchestrator | 2026-03-18 02:43:07.428882 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-18 02:43:07.428888 | orchestrator | Wednesday 18 March 2026 02:43:01 +0000 (0:00:00.365) 0:00:05.286 ******* 2026-03-18 02:43:07.428894 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 02:43:07.428901 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 02:43:07.428907 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 02:43:07.428913 | orchestrator | 2026-03-18 02:43:07.428919 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-18 02:43:07.428925 | orchestrator | Wednesday 18 March 2026 02:43:02 +0000 (0:00:00.768) 0:00:06.055 ******* 2026-03-18 02:43:07.428932 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:43:07.428938 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:43:07.428944 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:43:07.428950 | orchestrator | 2026-03-18 02:43:07.428957 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-18 02:43:07.428963 | orchestrator | Wednesday 18 March 2026 02:43:02 +0000 (0:00:00.494) 0:00:06.549 ******* 2026-03-18 02:43:07.428969 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 02:43:07.428975 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 02:43:07.428981 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 02:43:07.428987 | orchestrator | 2026-03-18 02:43:07.428994 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-18 02:43:07.429000 | orchestrator | Wednesday 18 March 2026 02:43:05 +0000 (0:00:02.271) 0:00:08.821 ******* 2026-03-18 02:43:07.429006 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-18 02:43:07.429013 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-18 02:43:07.429020 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-18 02:43:07.429026 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:07.429032 | 
orchestrator | 2026-03-18 02:43:07.429051 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-18 02:43:07.429058 | orchestrator | Wednesday 18 March 2026 02:43:05 +0000 (0:00:00.764) 0:00:09.585 ******* 2026-03-18 02:43:07.429066 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-18 02:43:07.429075 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-18 02:43:07.429081 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-18 02:43:07.429088 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:07.429094 | orchestrator | 2026-03-18 02:43:07.429100 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-18 02:43:07.429106 | orchestrator | Wednesday 18 March 2026 02:43:06 +0000 (0:00:01.170) 0:00:10.756 ******* 2026-03-18 02:43:07.429119 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:07.429173 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:07.429181 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:07.429187 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:07.429195 | orchestrator | 2026-03-18 02:43:07.429202 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-18 02:43:07.429208 | orchestrator | Wednesday 18 March 2026 02:43:07 +0000 (0:00:00.207) 0:00:10.963 ******* 2026-03-18 02:43:07.429253 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'dfaa0207b10e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 02:43:03.721286', 'end': '2026-03-18 02:43:03.759219', 'delta': '0:00:00.037933', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['dfaa0207b10e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-18 02:43:07.429263 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1edfdf2d0145', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 02:43:04.281728', 'end': '2026-03-18 02:43:04.321219', 'delta': '0:00:00.039491', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1edfdf2d0145'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-18 02:43:07.429276 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'fc8e238828f1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 02:43:04.860168', 'end': '2026-03-18 02:43:04.908946', 'delta': '0:00:00.048778', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc8e238828f1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-18 02:43:14.818567 | orchestrator | 2026-03-18 02:43:14.818656 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-18 02:43:14.818668 | orchestrator | Wednesday 18 March 2026 02:43:07 +0000 (0:00:00.220) 0:00:11.183 ******* 2026-03-18 02:43:14.818677 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:43:14.818685 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:43:14.818692 | 
orchestrator | ok: [testbed-node-5] 2026-03-18 02:43:14.818699 | orchestrator | 2026-03-18 02:43:14.818724 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-18 02:43:14.818732 | orchestrator | Wednesday 18 March 2026 02:43:07 +0000 (0:00:00.469) 0:00:11.653 ******* 2026-03-18 02:43:14.818739 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-18 02:43:14.818747 | orchestrator | 2026-03-18 02:43:14.818754 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-18 02:43:14.818761 | orchestrator | Wednesday 18 March 2026 02:43:09 +0000 (0:00:01.699) 0:00:13.352 ******* 2026-03-18 02:43:14.818767 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:14.818774 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:14.818796 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:14.818809 | orchestrator | 2026-03-18 02:43:14.818821 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-18 02:43:14.818834 | orchestrator | Wednesday 18 March 2026 02:43:09 +0000 (0:00:00.358) 0:00:13.710 ******* 2026-03-18 02:43:14.818851 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:14.818861 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:14.818873 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:14.818885 | orchestrator | 2026-03-18 02:43:14.818897 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 02:43:14.818908 | orchestrator | Wednesday 18 March 2026 02:43:10 +0000 (0:00:00.977) 0:00:14.687 ******* 2026-03-18 02:43:14.818920 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:14.818931 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:14.818943 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:14.818955 | orchestrator | 2026-03-18 02:43:14.818966 | 
orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-18 02:43:14.818979 | orchestrator | Wednesday 18 March 2026 02:43:11 +0000 (0:00:00.325) 0:00:15.013 ******* 2026-03-18 02:43:14.818991 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:43:14.819001 | orchestrator | 2026-03-18 02:43:14.819012 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-18 02:43:14.819024 | orchestrator | Wednesday 18 March 2026 02:43:11 +0000 (0:00:00.129) 0:00:15.142 ******* 2026-03-18 02:43:14.819035 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:14.819047 | orchestrator | 2026-03-18 02:43:14.819059 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 02:43:14.819070 | orchestrator | Wednesday 18 March 2026 02:43:11 +0000 (0:00:00.242) 0:00:15.384 ******* 2026-03-18 02:43:14.819082 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:14.819095 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:14.819107 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:14.819159 | orchestrator | 2026-03-18 02:43:14.819167 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-18 02:43:14.819175 | orchestrator | Wednesday 18 March 2026 02:43:11 +0000 (0:00:00.311) 0:00:15.696 ******* 2026-03-18 02:43:14.819183 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:14.819190 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:14.819206 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:14.819213 | orchestrator | 2026-03-18 02:43:14.819221 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-18 02:43:14.819228 | orchestrator | Wednesday 18 March 2026 02:43:12 +0000 (0:00:00.337) 0:00:16.033 ******* 2026-03-18 02:43:14.819236 | orchestrator | skipping: [testbed-node-3] 
2026-03-18 02:43:14.819244 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:14.819251 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:14.819258 | orchestrator | 2026-03-18 02:43:14.819266 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-18 02:43:14.819274 | orchestrator | Wednesday 18 March 2026 02:43:12 +0000 (0:00:00.600) 0:00:16.634 ******* 2026-03-18 02:43:14.819281 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:14.819288 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:14.819296 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:14.819313 | orchestrator | 2026-03-18 02:43:14.819320 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-18 02:43:14.819329 | orchestrator | Wednesday 18 March 2026 02:43:13 +0000 (0:00:00.372) 0:00:17.007 ******* 2026-03-18 02:43:14.819336 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:14.819343 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:14.819351 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:14.819358 | orchestrator | 2026-03-18 02:43:14.819366 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-18 02:43:14.819374 | orchestrator | Wednesday 18 March 2026 02:43:13 +0000 (0:00:00.371) 0:00:17.379 ******* 2026-03-18 02:43:14.819381 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:14.819388 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:14.819396 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:14.819403 | orchestrator | 2026-03-18 02:43:14.819411 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-18 02:43:14.819419 | orchestrator | Wednesday 18 March 2026 02:43:14 +0000 (0:00:00.579) 0:00:17.959 ******* 2026-03-18 02:43:14.819426 | orchestrator | skipping: [testbed-node-3] 
2026-03-18 02:43:14.819434 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:14.819441 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:14.819448 | orchestrator | 2026-03-18 02:43:14.819455 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-18 02:43:14.819463 | orchestrator | Wednesday 18 March 2026 02:43:14 +0000 (0:00:00.377) 0:00:18.336 ******* 2026-03-18 02:43:14.819490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb', 'dm-uuid-LVM-W5EL6s0cOZukCJgJFLnUeUfZF3v581ieTXRD31C4XH2D2TZlGP7o3YPUberRNbx3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:14.819507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a', 'dm-uuid-LVM-OBDgCO1TfJO26KZndmcG4XUfdlxxEe11eqb03b1R3TiAd5BAik4vvOnTIot4pXZ1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:14.819516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:14.819527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:14.819535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:14.819547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:14.819554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-18 02:43:14.819560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:14.819567 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:14.819584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:14.974704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:43:14.974823 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hyLInL-qBmT-hkMu-ewvD-iGD6-c0uQ-hDScLy', 'scsi-0QEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768', 'scsi-SQEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:43:14.974841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-w7AXxM-UwrZ-P6aH-00LI-mMT0-kFYy-HZNbAJ', 'scsi-0QEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e', 'scsi-SQEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:43:14.974866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa', 'scsi-SQEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:43:14.974884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af', 'dm-uuid-LVM-r2QSpox5L5YvZxbLW2ofZmnL2yRyHAcb31gjpKAQuj1V0dzEH4DggGep9onP7U5M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:14.974897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:43:14.974907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d', 
'dm-uuid-LVM-1nghto8FjlgOMGE0qJuNE35bcFGeakm7FeqYn9N8yM2I7mHfmTh3UyYEE55mFAWL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:14.974924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:14.974937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:14.974947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:14.974957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:14.974973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:15.183797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:15.183926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:15.183951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:15.183993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:43:15.184032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jnV2yd-YS7R-Vqep-tcrP-VJxp-okiM-Yb1ELG', 'scsi-0QEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc', 'scsi-SQEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:43:15.184059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gdKyfy-wnzk-0StP-QaSt-irpk-iROA-l0CD4I', 'scsi-0QEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a', 'scsi-SQEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:43:15.184073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a', 'scsi-SQEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:43:15.184096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:43:15.184112 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:15.184157 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:15.184174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea', 'dm-uuid-LVM-datDZvt3H0VWDhIXtfyG2nxxdM9DebWAT9QYVvDcd9eNFRbEejIJhI9dObKuqGRw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:15.184191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f', 'dm-uuid-LVM-IyJ409WPQ2Ewwg643e4T8GcTWsVLXvc4PfxdfcUZHCmpn1f575ZO5FoE28c03VdS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:15.184206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:15.184233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:15.427816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:15.427961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:15.427994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:15.428003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:15.428010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:15.428017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-18 02:43:15.428048 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:43:15.428060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wEEZ4B-D8dq-p1QG-iT9B-teZl-6bRA-4Rtw7V', 'scsi-0QEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00', 'scsi-SQEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:43:15.428076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yCkM9t-1XKI-b30Y-UmhR-lcOf-KBlN-LK1ss0', 'scsi-0QEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568', 'scsi-SQEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:43:15.428084 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216', 'scsi-SQEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:43:15.428093 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-18 02:43:15.428101 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:15.428111 | orchestrator | 2026-03-18 02:43:15.428118 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-18 02:43:15.428174 | orchestrator | Wednesday 18 March 2026 02:43:15 +0000 (0:00:00.730) 0:00:19.066 ******* 2026-03-18 02:43:15.428190 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb', 'dm-uuid-LVM-W5EL6s0cOZukCJgJFLnUeUfZF3v581ieTXRD31C4XH2D2TZlGP7o3YPUberRNbx3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.541799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a', 'dm-uuid-LVM-OBDgCO1TfJO26KZndmcG4XUfdlxxEe11eqb03b1R3TiAd5BAik4vvOnTIot4pXZ1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.541887 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.541895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.541900 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.541905 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.541910 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.541928 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.541938 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.541943 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.541948 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af', 'dm-uuid-LVM-r2QSpox5L5YvZxbLW2ofZmnL2yRyHAcb31gjpKAQuj1V0dzEH4DggGep9onP7U5M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.541963 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.644714 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d', 'dm-uuid-LVM-1nghto8FjlgOMGE0qJuNE35bcFGeakm7FeqYn9N8yM2I7mHfmTh3UyYEE55mFAWL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.644810 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hyLInL-qBmT-hkMu-ewvD-iGD6-c0uQ-hDScLy', 'scsi-0QEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768', 'scsi-SQEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.644823 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.644831 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-w7AXxM-UwrZ-P6aH-00LI-mMT0-kFYy-HZNbAJ', 'scsi-0QEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e', 'scsi-SQEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.644873 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.644897 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa', 'scsi-SQEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.644906 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.644914 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.644921 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.644929 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:15.644938 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.644956 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.644970 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.767690 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.767789 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-18 02:43:15.767843 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jnV2yd-YS7R-Vqep-tcrP-VJxp-okiM-Yb1ELG', 'scsi-0QEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc', 'scsi-SQEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.767874 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea', 'dm-uuid-LVM-datDZvt3H0VWDhIXtfyG2nxxdM9DebWAT9QYVvDcd9eNFRbEejIJhI9dObKuqGRw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.767886 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gdKyfy-wnzk-0StP-QaSt-irpk-iROA-l0CD4I', 'scsi-0QEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a', 'scsi-SQEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.767897 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f', 'dm-uuid-LVM-IyJ409WPQ2Ewwg643e4T8GcTWsVLXvc4PfxdfcUZHCmpn1f575ZO5FoE28c03VdS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.767908 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a', 'scsi-SQEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.767929 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.767948 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.958890 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.959012 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:15.959030 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.959044 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.959082 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.959110 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.959122 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.959213 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.959231 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.959261 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wEEZ4B-D8dq-p1QG-iT9B-teZl-6bRA-4Rtw7V', 'scsi-0QEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00', 'scsi-SQEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:15.959282 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yCkM9t-1XKI-b30Y-UmhR-lcOf-KBlN-LK1ss0', 'scsi-0QEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568', 'scsi-SQEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:27.875901 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216', 'scsi-SQEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:27.876027 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-18-01-18-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-18 02:43:27.876087 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:27.876111 | orchestrator | 2026-03-18 02:43:27.876194 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-18 02:43:27.876209 | orchestrator | Wednesday 18 March 2026 02:43:15 +0000 (0:00:00.645) 0:00:19.712 ******* 2026-03-18 02:43:27.876220 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:43:27.876232 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:43:27.876243 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:43:27.876253 | orchestrator | 2026-03-18 02:43:27.876265 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2026-03-18 02:43:27.876276 | orchestrator | Wednesday 18 March 2026 02:43:16 +0000 (0:00:00.983) 0:00:20.695 ******* 2026-03-18 02:43:27.876286 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:43:27.876297 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:43:27.876308 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:43:27.876318 | orchestrator | 2026-03-18 02:43:27.876329 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 02:43:27.876340 | orchestrator | Wednesday 18 March 2026 02:43:17 +0000 (0:00:00.329) 0:00:21.024 ******* 2026-03-18 02:43:27.876351 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:43:27.876362 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:43:27.876372 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:43:27.876384 | orchestrator | 2026-03-18 02:43:27.876395 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 02:43:27.876406 | orchestrator | Wednesday 18 March 2026 02:43:18 +0000 (0:00:01.587) 0:00:22.612 ******* 2026-03-18 02:43:27.876417 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:27.876428 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:27.876455 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:27.876468 | orchestrator | 2026-03-18 02:43:27.876480 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 02:43:27.876493 | orchestrator | Wednesday 18 March 2026 02:43:19 +0000 (0:00:00.317) 0:00:22.929 ******* 2026-03-18 02:43:27.876504 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:27.876516 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:27.876529 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:27.876540 | orchestrator | 2026-03-18 02:43:27.876553 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2026-03-18 02:43:27.876565 | orchestrator | Wednesday 18 March 2026 02:43:19 +0000 (0:00:00.777) 0:00:23.706 ******* 2026-03-18 02:43:27.876577 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:27.876589 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:27.876602 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:27.876613 | orchestrator | 2026-03-18 02:43:27.876626 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-18 02:43:27.876639 | orchestrator | Wednesday 18 March 2026 02:43:20 +0000 (0:00:00.375) 0:00:24.082 ******* 2026-03-18 02:43:27.876651 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-18 02:43:27.876663 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-18 02:43:27.876676 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-18 02:43:27.876688 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-18 02:43:27.876700 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-18 02:43:27.876712 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-18 02:43:27.876724 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-18 02:43:27.876737 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-18 02:43:27.876749 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-18 02:43:27.876761 | orchestrator | 2026-03-18 02:43:27.876773 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-18 02:43:27.876796 | orchestrator | Wednesday 18 March 2026 02:43:21 +0000 (0:00:01.096) 0:00:25.179 ******* 2026-03-18 02:43:27.876830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-18 02:43:27.876842 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-18 02:43:27.876853 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-03-18 02:43:27.876863 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:27.876874 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-18 02:43:27.876885 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-18 02:43:27.876896 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-18 02:43:27.876907 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:27.876917 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-18 02:43:27.876928 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-18 02:43:27.876938 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-18 02:43:27.876949 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:27.876960 | orchestrator | 2026-03-18 02:43:27.876971 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-18 02:43:27.876982 | orchestrator | Wednesday 18 March 2026 02:43:21 +0000 (0:00:00.427) 0:00:25.607 ******* 2026-03-18 02:43:27.876993 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 02:43:27.877004 | orchestrator | 2026-03-18 02:43:27.877015 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 02:43:27.877027 | orchestrator | Wednesday 18 March 2026 02:43:22 +0000 (0:00:00.871) 0:00:26.478 ******* 2026-03-18 02:43:27.877038 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:27.877049 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:27.877059 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:27.877070 | orchestrator | 2026-03-18 02:43:27.877081 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2026-03-18 02:43:27.877091 | orchestrator | Wednesday 18 March 2026 02:43:23 +0000 (0:00:00.359) 0:00:26.838 ******* 2026-03-18 02:43:27.877102 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:27.877113 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:27.877123 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:27.877162 | orchestrator | 2026-03-18 02:43:27.877173 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 02:43:27.877184 | orchestrator | Wednesday 18 March 2026 02:43:23 +0000 (0:00:00.370) 0:00:27.208 ******* 2026-03-18 02:43:27.877194 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:27.877205 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:43:27.877216 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:43:27.877226 | orchestrator | 2026-03-18 02:43:27.877237 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 02:43:27.877248 | orchestrator | Wednesday 18 March 2026 02:43:24 +0000 (0:00:00.576) 0:00:27.785 ******* 2026-03-18 02:43:27.877258 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:43:27.877269 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:43:27.877279 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:43:27.877290 | orchestrator | 2026-03-18 02:43:27.877301 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 02:43:27.877312 | orchestrator | Wednesday 18 March 2026 02:43:24 +0000 (0:00:00.434) 0:00:28.219 ******* 2026-03-18 02:43:27.877323 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 02:43:27.877334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 02:43:27.877344 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 02:43:27.877355 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:27.877374 | 
orchestrator | 2026-03-18 02:43:27.877385 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 02:43:27.877396 | orchestrator | Wednesday 18 March 2026 02:43:24 +0000 (0:00:00.409) 0:00:28.628 ******* 2026-03-18 02:43:27.877412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 02:43:27.877423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 02:43:27.877434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 02:43:27.877444 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:27.877455 | orchestrator | 2026-03-18 02:43:27.877466 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 02:43:27.877477 | orchestrator | Wednesday 18 March 2026 02:43:25 +0000 (0:00:00.415) 0:00:29.044 ******* 2026-03-18 02:43:27.877488 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 02:43:27.877498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 02:43:27.877509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 02:43:27.877519 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:43:27.877530 | orchestrator | 2026-03-18 02:43:27.877541 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 02:43:27.877552 | orchestrator | Wednesday 18 March 2026 02:43:25 +0000 (0:00:00.397) 0:00:29.442 ******* 2026-03-18 02:43:27.877562 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:43:27.877573 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:43:27.877589 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:43:27.877607 | orchestrator | 2026-03-18 02:43:27.877625 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 02:43:27.877644 | orchestrator | Wednesday 18 March 2026 02:43:26 +0000 
(0:00:00.353) 0:00:29.795 ******* 2026-03-18 02:43:27.877662 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-18 02:43:27.877680 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-18 02:43:27.877691 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-18 02:43:27.877702 | orchestrator | 2026-03-18 02:43:27.877712 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-18 02:43:27.877723 | orchestrator | Wednesday 18 March 2026 02:43:26 +0000 (0:00:00.919) 0:00:30.715 ******* 2026-03-18 02:43:27.877734 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 02:43:27.877802 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 02:45:06.062448 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 02:45:06.062573 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-18 02:45:06.062592 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 02:45:06.062605 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 02:45:06.062616 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 02:45:06.062628 | orchestrator | 2026-03-18 02:45:06.062640 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-18 02:45:06.062652 | orchestrator | Wednesday 18 March 2026 02:43:27 +0000 (0:00:00.910) 0:00:31.626 ******* 2026-03-18 02:45:06.062663 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 02:45:06.062674 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 02:45:06.062684 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 02:45:06.062695 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-18 02:45:06.062706 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 02:45:06.062717 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 02:45:06.062728 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 02:45:06.062763 | orchestrator | 2026-03-18 02:45:06.062775 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-18 02:45:06.062786 | orchestrator | Wednesday 18 March 2026 02:43:29 +0000 (0:00:01.791) 0:00:33.418 ******* 2026-03-18 02:45:06.062796 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:45:06.062809 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:45:06.062820 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-18 02:45:06.062831 | orchestrator | 2026-03-18 02:45:06.062841 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-18 02:45:06.062852 | orchestrator | Wednesday 18 March 2026 02:43:30 +0000 (0:00:00.405) 0:00:33.823 ******* 2026-03-18 02:45:06.062865 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-18 02:45:06.062879 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-03-18 02:45:06.062892 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-18 02:45:06.062932 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-18 02:45:06.062953 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-18 02:45:06.062972 | orchestrator | 2026-03-18 02:45:06.062990 | orchestrator | TASK [generate keys] *********************************************************** 2026-03-18 02:45:06.063010 | orchestrator | Wednesday 18 March 2026 02:44:13 +0000 (0:00:43.677) 0:01:17.501 ******* 2026-03-18 02:45:06.063031 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063045 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063058 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063071 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063083 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063096 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 
02:45:06.063109 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-18 02:45:06.063151 | orchestrator | 2026-03-18 02:45:06.063165 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-18 02:45:06.063178 | orchestrator | Wednesday 18 March 2026 02:44:37 +0000 (0:00:23.387) 0:01:40.889 ******* 2026-03-18 02:45:06.063210 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063223 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063233 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063244 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063266 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063277 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063287 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-18 02:45:06.063298 | orchestrator | 2026-03-18 02:45:06.063308 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-18 02:45:06.063319 | orchestrator | Wednesday 18 March 2026 02:44:48 +0000 (0:00:11.603) 0:01:52.492 ******* 2026-03-18 02:45:06.063330 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063341 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-18 02:45:06.063352 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-18 02:45:06.063362 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063373 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-03-18 02:45:06.063384 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-18 02:45:06.063395 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063405 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-18 02:45:06.063416 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-18 02:45:06.063426 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063437 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-18 02:45:06.063453 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-18 02:45:06.063473 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063493 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-18 02:45:06.063513 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-18 02:45:06.063532 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 02:45:06.063552 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-18 02:45:06.063570 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-18 02:45:06.063589 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-18 02:45:06.063609 | orchestrator | 2026-03-18 02:45:06.063629 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 02:45:06.063648 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-18 02:45:06.063677 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-18 02:45:06.063696 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-18 02:45:06.063717 | orchestrator | 2026-03-18 02:45:06.063737 | orchestrator | 2026-03-18 02:45:06.063758 | orchestrator | 2026-03-18 02:45:06.063778 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 02:45:06.063799 | orchestrator | Wednesday 18 March 2026 02:45:06 +0000 (0:00:17.299) 0:02:09.791 ******* 2026-03-18 02:45:06.063818 | orchestrator | =============================================================================== 2026-03-18 02:45:06.063839 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.68s 2026-03-18 02:45:06.063859 | orchestrator | generate keys ---------------------------------------------------------- 23.39s 2026-03-18 02:45:06.063882 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.30s 2026-03-18 02:45:06.063892 | orchestrator | get keys from monitors ------------------------------------------------- 11.60s 2026-03-18 02:45:06.063903 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.27s 2026-03-18 02:45:06.063914 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.79s 2026-03-18 02:45:06.063924 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.70s 2026-03-18 02:45:06.063935 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 1.59s 2026-03-18 02:45:06.063946 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.17s 2026-03-18 02:45:06.063956 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.10s 2026-03-18 
02:45:06.063967 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.98s 2026-03-18 02:45:06.063978 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.98s 2026-03-18 02:45:06.063989 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.92s 2026-03-18 02:45:06.064011 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.91s 2026-03-18 02:45:06.451960 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.87s 2026-03-18 02:45:06.452075 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.87s 2026-03-18 02:45:06.452096 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.78s 2026-03-18 02:45:06.452110 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.77s 2026-03-18 02:45:06.452187 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.76s 2026-03-18 02:45:06.452202 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.73s 2026-03-18 02:45:09.060008 | orchestrator | 2026-03-18 02:45:09 | INFO  | Task 60f7aea5-14c4-43f2-bb95-a8a05feddd23 (copy-ceph-keys) was prepared for execution. 2026-03-18 02:45:09.060078 | orchestrator | 2026-03-18 02:45:09 | INFO  | It takes a moment until task 60f7aea5-14c4-43f2-bb95-a8a05feddd23 (copy-ceph-keys) has been started and output is visible here. 
2026-03-18 02:45:48.891200 | orchestrator | 2026-03-18 02:45:48.891294 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-18 02:45:48.891304 | orchestrator | 2026-03-18 02:45:48.891311 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-18 02:45:48.891318 | orchestrator | Wednesday 18 March 2026 02:45:13 +0000 (0:00:00.170) 0:00:00.170 ******* 2026-03-18 02:45:48.891326 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-18 02:45:48.891334 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-18 02:45:48.891340 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-18 02:45:48.891346 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-18 02:45:48.891353 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-18 02:45:48.891360 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-18 02:45:48.891367 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-18 02:45:48.891374 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-18 02:45:48.891381 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-18 02:45:48.891387 | orchestrator | 2026-03-18 02:45:48.891394 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-18 02:45:48.891418 | orchestrator | Wednesday 18 March 2026 02:45:18 +0000 (0:00:04.715) 0:00:04.886 ******* 2026-03-18 02:45:48.891425 | 
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-18 02:45:48.891431 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-18 02:45:48.891438 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-18 02:45:48.891456 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-18 02:45:48.891463 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-18 02:45:48.891469 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-18 02:45:48.891475 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-18 02:45:48.891482 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-18 02:45:48.891488 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-18 02:45:48.891495 | orchestrator | 2026-03-18 02:45:48.891501 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-18 02:45:48.891507 | orchestrator | Wednesday 18 March 2026 02:45:22 +0000 (0:00:04.198) 0:00:09.084 ******* 2026-03-18 02:45:48.891514 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-18 02:45:48.891520 | orchestrator | 2026-03-18 02:45:48.891526 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-18 02:45:48.891533 | orchestrator | Wednesday 18 March 2026 02:45:23 +0000 (0:00:01.021) 0:00:10.106 ******* 2026-03-18 02:45:48.891539 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-18 
02:45:48.891547 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-18 02:45:48.891553 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-18 02:45:48.891560 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-18 02:45:48.891567 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-18 02:45:48.891573 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-18 02:45:48.891579 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-03-18 02:45:48.891585 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-18 02:45:48.891591 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-18 02:45:48.891597 | orchestrator | 2026-03-18 02:45:48.891603 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-18 02:45:48.891610 | orchestrator | Wednesday 18 March 2026 02:45:37 +0000 (0:00:14.387) 0:00:24.494 ******* 2026-03-18 02:45:48.891616 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-18 02:45:48.891623 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-18 02:45:48.891630 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-18 02:45:48.891636 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-18 02:45:48.891657 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-18 02:45:48.891664 | orchestrator 
| ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-18 02:45:48.891670 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-18 02:45:48.891684 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-18 02:45:48.891690 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-18 02:45:48.891696 | orchestrator | 2026-03-18 02:45:48.891703 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-18 02:45:48.891710 | orchestrator | Wednesday 18 March 2026 02:45:41 +0000 (0:00:03.229) 0:00:27.723 ******* 2026-03-18 02:45:48.891718 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-18 02:45:48.891725 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-18 02:45:48.891732 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-18 02:45:48.891738 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-18 02:45:48.891745 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-18 02:45:48.891752 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-18 02:45:48.891758 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-18 02:45:48.891765 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-18 02:45:48.891772 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-18 02:45:48.891778 | orchestrator | 2026-03-18 02:45:48.891786 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 02:45:48.891792 | orchestrator | testbed-manager : ok=6 
 changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 02:45:48.891801 | orchestrator | 2026-03-18 02:45:48.891807 | orchestrator | 2026-03-18 02:45:48.891814 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 02:45:48.891821 | orchestrator | Wednesday 18 March 2026 02:45:48 +0000 (0:00:07.407) 0:00:35.130 ******* 2026-03-18 02:45:48.891831 | orchestrator | =============================================================================== 2026-03-18 02:45:48.891838 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.39s 2026-03-18 02:45:48.891845 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.41s 2026-03-18 02:45:48.891852 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.72s 2026-03-18 02:45:48.891858 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.20s 2026-03-18 02:45:48.891864 | orchestrator | Check if target directories exist --------------------------------------- 3.23s 2026-03-18 02:45:48.891872 | orchestrator | Create share directory -------------------------------------------------- 1.02s 2026-03-18 02:46:01.612066 | orchestrator | 2026-03-18 02:46:01 | INFO  | Task 4cf44705-a816-407b-9730-08276069b16d (cephclient) was prepared for execution. 2026-03-18 02:46:01.612237 | orchestrator | 2026-03-18 02:46:01 | INFO  | It takes a moment until task 4cf44705-a816-407b-9730-08276069b16d (cephclient) has been started and output is visible here. 
2026-03-18 02:47:03.756598 | orchestrator | 2026-03-18 02:47:03.756712 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-18 02:47:03.756726 | orchestrator | 2026-03-18 02:47:03.756737 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-18 02:47:03.756748 | orchestrator | Wednesday 18 March 2026 02:46:06 +0000 (0:00:00.248) 0:00:00.248 ******* 2026-03-18 02:47:03.756758 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-18 02:47:03.756770 | orchestrator | 2026-03-18 02:47:03.756780 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-18 02:47:03.756789 | orchestrator | Wednesday 18 March 2026 02:46:06 +0000 (0:00:00.259) 0:00:00.508 ******* 2026-03-18 02:47:03.756800 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-18 02:47:03.756830 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-18 02:47:03.756855 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-18 02:47:03.756865 | orchestrator | 2026-03-18 02:47:03.756876 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-18 02:47:03.756894 | orchestrator | Wednesday 18 March 2026 02:46:07 +0000 (0:00:01.434) 0:00:01.942 ******* 2026-03-18 02:47:03.756912 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-18 02:47:03.756929 | orchestrator | 2026-03-18 02:47:03.756945 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-18 02:47:03.756961 | orchestrator | Wednesday 18 March 2026 02:46:09 +0000 (0:00:01.645) 0:00:03.588 ******* 2026-03-18 02:47:03.756979 | orchestrator | 
changed: [testbed-manager] 2026-03-18 02:47:03.756996 | orchestrator | 2026-03-18 02:47:03.757014 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-18 02:47:03.757031 | orchestrator | Wednesday 18 March 2026 02:46:10 +0000 (0:00:00.964) 0:00:04.553 ******* 2026-03-18 02:47:03.757044 | orchestrator | changed: [testbed-manager] 2026-03-18 02:47:03.757054 | orchestrator | 2026-03-18 02:47:03.757064 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-18 02:47:03.757073 | orchestrator | Wednesday 18 March 2026 02:46:11 +0000 (0:00:01.030) 0:00:05.584 ******* 2026-03-18 02:47:03.757083 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-03-18 02:47:03.757092 | orchestrator | ok: [testbed-manager] 2026-03-18 02:47:03.757102 | orchestrator | 2026-03-18 02:47:03.757141 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-18 02:47:03.757154 | orchestrator | Wednesday 18 March 2026 02:46:53 +0000 (0:00:41.884) 0:00:47.468 ******* 2026-03-18 02:47:03.757166 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-18 02:47:03.757177 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-18 02:47:03.757189 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-18 02:47:03.757200 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-18 02:47:03.757211 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-18 02:47:03.757222 | orchestrator | 2026-03-18 02:47:03.757233 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-18 02:47:03.757245 | orchestrator | Wednesday 18 March 2026 02:46:57 +0000 (0:00:04.137) 0:00:51.606 ******* 2026-03-18 02:47:03.757256 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-18 02:47:03.757267 | 
orchestrator | 2026-03-18 02:47:03.757279 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-18 02:47:03.757290 | orchestrator | Wednesday 18 March 2026 02:46:58 +0000 (0:00:00.497) 0:00:52.104 ******* 2026-03-18 02:47:03.757301 | orchestrator | skipping: [testbed-manager] 2026-03-18 02:47:03.757312 | orchestrator | 2026-03-18 02:47:03.757323 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-18 02:47:03.757334 | orchestrator | Wednesday 18 March 2026 02:46:58 +0000 (0:00:00.151) 0:00:52.255 ******* 2026-03-18 02:47:03.757345 | orchestrator | skipping: [testbed-manager] 2026-03-18 02:47:03.757356 | orchestrator | 2026-03-18 02:47:03.757368 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-03-18 02:47:03.757379 | orchestrator | Wednesday 18 March 2026 02:46:58 +0000 (0:00:00.578) 0:00:52.833 ******* 2026-03-18 02:47:03.757390 | orchestrator | changed: [testbed-manager] 2026-03-18 02:47:03.757401 | orchestrator | 2026-03-18 02:47:03.757412 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-18 02:47:03.757423 | orchestrator | Wednesday 18 March 2026 02:47:00 +0000 (0:00:01.625) 0:00:54.458 ******* 2026-03-18 02:47:03.757435 | orchestrator | changed: [testbed-manager] 2026-03-18 02:47:03.757446 | orchestrator | 2026-03-18 02:47:03.757470 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-18 02:47:03.757496 | orchestrator | Wednesday 18 March 2026 02:47:01 +0000 (0:00:00.774) 0:00:55.233 ******* 2026-03-18 02:47:03.757507 | orchestrator | changed: [testbed-manager] 2026-03-18 02:47:03.757517 | orchestrator | 2026-03-18 02:47:03.757526 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-18 02:47:03.757535 | orchestrator | Wednesday 18 March 2026 
02:47:01 +0000 (0:00:00.639) 0:00:55.873 ******* 2026-03-18 02:47:03.757545 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-18 02:47:03.757554 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-18 02:47:03.757564 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-18 02:47:03.757574 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-18 02:47:03.757583 | orchestrator | 2026-03-18 02:47:03.757592 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 02:47:03.757602 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 02:47:03.757613 | orchestrator | 2026-03-18 02:47:03.757622 | orchestrator | 2026-03-18 02:47:03.757662 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 02:47:03.757674 | orchestrator | Wednesday 18 March 2026 02:47:03 +0000 (0:00:01.547) 0:00:57.420 ******* 2026-03-18 02:47:03.757683 | orchestrator | =============================================================================== 2026-03-18 02:47:03.757692 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.88s 2026-03-18 02:47:03.757702 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.14s 2026-03-18 02:47:03.757711 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.65s 2026-03-18 02:47:03.757721 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.63s 2026-03-18 02:47:03.757730 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.55s 2026-03-18 02:47:03.757739 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.43s 2026-03-18 02:47:03.757749 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file 
---------------- 1.03s 2026-03-18 02:47:03.757758 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.96s 2026-03-18 02:47:03.757768 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.77s 2026-03-18 02:47:03.757777 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s 2026-03-18 02:47:03.757787 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.58s 2026-03-18 02:47:03.757796 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.50s 2026-03-18 02:47:03.757806 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s 2026-03-18 02:47:03.757815 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2026-03-18 02:47:06.413760 | orchestrator | 2026-03-18 02:47:06 | INFO  | Task 481338d4-cc22-4d73-a850-30416575784b (ceph-bootstrap-dashboard) was prepared for execution. 2026-03-18 02:47:06.413861 | orchestrator | 2026-03-18 02:47:06 | INFO  | It takes a moment until task 481338d4-cc22-4d73-a850-30416575784b (ceph-bootstrap-dashboard) has been started and output is visible here. 
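The "Copy wrapper scripts" task above installs host-side shims for `ceph`, `ceph-authtool`, `rados`, `radosgw-admin` and `rbd` so those commands run inside the cephclient container. A minimal sketch of how such shims could be generated — the container name `cephclient`, the `docker exec` invocation, and the target directory are assumptions, not the actual OSISM template:

```shell
#!/usr/bin/env bash
# Sketch: generate wrapper shims for the ceph client tools named in the log.
# Assumption: a long-running container called "cephclient" provides the binaries.
set -eu

bindir=$(mktemp -d)   # stand-in for e.g. /usr/local/bin

for cmd in ceph ceph-authtool rados radosgw-admin rbd; do
  # Each shim forwards its arguments into the container.
  cat > "$bindir/$cmd" <<EOF
#!/usr/bin/env bash
exec docker exec -i cephclient $cmd "\$@"
EOF
  chmod +x "$bindir/$cmd"
done

ls "$bindir"
```

With shims like these in place, `ceph -s` on the manager host transparently executes inside the container, which is why the play only needs to copy small scripts rather than install ceph packages (matching the skipped "Include package tasks" step).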
2026-03-18 02:48:26.093613 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-18 02:48:26.093763 | orchestrator | 2.16.14 2026-03-18 02:48:26.093793 | orchestrator | 2026-03-18 02:48:26.093814 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-03-18 02:48:26.093834 | orchestrator | 2026-03-18 02:48:26.093853 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-18 02:48:26.093872 | orchestrator | Wednesday 18 March 2026 02:47:11 +0000 (0:00:00.289) 0:00:00.289 ******* 2026-03-18 02:48:26.093891 | orchestrator | changed: [testbed-manager] 2026-03-18 02:48:26.093910 | orchestrator | 2026-03-18 02:48:26.093961 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-18 02:48:26.093979 | orchestrator | Wednesday 18 March 2026 02:47:13 +0000 (0:00:01.977) 0:00:02.266 ******* 2026-03-18 02:48:26.093997 | orchestrator | changed: [testbed-manager] 2026-03-18 02:48:26.094097 | orchestrator | 2026-03-18 02:48:26.094200 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-18 02:48:26.094213 | orchestrator | Wednesday 18 March 2026 02:47:14 +0000 (0:00:01.153) 0:00:03.420 ******* 2026-03-18 02:48:26.094225 | orchestrator | changed: [testbed-manager] 2026-03-18 02:48:26.094238 | orchestrator | 2026-03-18 02:48:26.094259 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-18 02:48:26.094278 | orchestrator | Wednesday 18 March 2026 02:47:15 +0000 (0:00:01.174) 0:00:04.594 ******* 2026-03-18 02:48:26.094295 | orchestrator | changed: [testbed-manager] 2026-03-18 02:48:26.094316 | orchestrator | 2026-03-18 02:48:26.094337 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-18 02:48:26.094358 | orchestrator | Wednesday 18 March 
2026 02:47:16 +0000 (0:00:01.266) 0:00:05.861 ******* 2026-03-18 02:48:26.094380 | orchestrator | changed: [testbed-manager] 2026-03-18 02:48:26.094402 | orchestrator | 2026-03-18 02:48:26.094422 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-18 02:48:26.094435 | orchestrator | Wednesday 18 March 2026 02:47:18 +0000 (0:00:01.154) 0:00:07.016 ******* 2026-03-18 02:48:26.094448 | orchestrator | changed: [testbed-manager] 2026-03-18 02:48:26.094460 | orchestrator | 2026-03-18 02:48:26.094473 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-18 02:48:26.094486 | orchestrator | Wednesday 18 March 2026 02:47:19 +0000 (0:00:01.154) 0:00:08.170 ******* 2026-03-18 02:48:26.094516 | orchestrator | changed: [testbed-manager] 2026-03-18 02:48:26.094528 | orchestrator | 2026-03-18 02:48:26.094539 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-18 02:48:26.094550 | orchestrator | Wednesday 18 March 2026 02:47:21 +0000 (0:00:02.078) 0:00:10.249 ******* 2026-03-18 02:48:26.094561 | orchestrator | changed: [testbed-manager] 2026-03-18 02:48:26.094572 | orchestrator | 2026-03-18 02:48:26.094583 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-18 02:48:26.094593 | orchestrator | Wednesday 18 March 2026 02:47:22 +0000 (0:00:01.319) 0:00:11.569 ******* 2026-03-18 02:48:26.094604 | orchestrator | changed: [testbed-manager] 2026-03-18 02:48:26.094615 | orchestrator | 2026-03-18 02:48:26.094625 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-18 02:48:26.094637 | orchestrator | Wednesday 18 March 2026 02:48:01 +0000 (0:00:38.666) 0:00:50.236 ******* 2026-03-18 02:48:26.094647 | orchestrator | skipping: [testbed-manager] 2026-03-18 02:48:26.094658 | orchestrator | 2026-03-18 02:48:26.094672 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2026-03-18 02:48:26.094689 | orchestrator | 2026-03-18 02:48:26.094707 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-18 02:48:26.094725 | orchestrator | Wednesday 18 March 2026 02:48:01 +0000 (0:00:00.176) 0:00:50.412 ******* 2026-03-18 02:48:26.094742 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:48:26.094761 | orchestrator | 2026-03-18 02:48:26.094779 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-18 02:48:26.094796 | orchestrator | 2026-03-18 02:48:26.094814 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-18 02:48:26.094833 | orchestrator | Wednesday 18 March 2026 02:48:03 +0000 (0:00:01.739) 0:00:52.151 ******* 2026-03-18 02:48:26.094852 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:48:26.094871 | orchestrator | 2026-03-18 02:48:26.094905 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-18 02:48:26.094924 | orchestrator | 2026-03-18 02:48:26.094943 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-18 02:48:26.094963 | orchestrator | Wednesday 18 March 2026 02:48:14 +0000 (0:00:11.230) 0:01:03.382 ******* 2026-03-18 02:48:26.094999 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:48:26.095018 | orchestrator | 2026-03-18 02:48:26.095038 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 02:48:26.095057 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-18 02:48:26.095137 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 02:48:26.095159 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 02:48:26.095178 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 02:48:26.095197 | orchestrator | 2026-03-18 02:48:26.095216 | orchestrator | 2026-03-18 02:48:26.095234 | orchestrator | 2026-03-18 02:48:26.095254 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 02:48:26.095266 | orchestrator | Wednesday 18 March 2026 02:48:25 +0000 (0:00:11.259) 0:01:14.641 ******* 2026-03-18 02:48:26.095283 | orchestrator | =============================================================================== 2026-03-18 02:48:26.095301 | orchestrator | Create admin user ------------------------------------------------------ 38.67s 2026-03-18 02:48:26.095351 | orchestrator | Restart ceph manager service ------------------------------------------- 24.23s 2026-03-18 02:48:26.095371 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.08s 2026-03-18 02:48:26.095390 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.98s 2026-03-18 02:48:26.095409 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.32s 2026-03-18 02:48:26.095427 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.27s 2026-03-18 02:48:26.095445 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.17s 2026-03-18 02:48:26.095459 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.15s 2026-03-18 02:48:26.095470 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.15s 2026-03-18 02:48:26.095481 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.15s 2026-03-18 02:48:26.095491 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.18s 2026-03-18 02:48:26.438655 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-03-18 02:48:28.686803 | orchestrator | 2026-03-18 02:48:28 | INFO  | Task a79d6a39-3d4d-49c1-a47a-9f314333fe78 (keystone) was prepared for execution. 2026-03-18 02:48:28.686885 | orchestrator | 2026-03-18 02:48:28 | INFO  | It takes a moment until task a79d6a39-3d4d-49c1-a47a-9f314333fe78 (keystone) has been started and output is visible here. 2026-03-18 02:48:36.379201 | orchestrator | 2026-03-18 02:48:36.379326 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 02:48:36.379341 | orchestrator | 2026-03-18 02:48:36.379351 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 02:48:36.379360 | orchestrator | Wednesday 18 March 2026 02:48:33 +0000 (0:00:00.303) 0:00:00.303 ******* 2026-03-18 02:48:36.379369 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:48:36.379418 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:48:36.379435 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:48:36.379457 | orchestrator | 2026-03-18 02:48:36.379475 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 02:48:36.379508 | orchestrator | Wednesday 18 March 2026 02:48:33 +0000 (0:00:00.335) 0:00:00.639 ******* 2026-03-18 02:48:36.379523 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-18 02:48:36.379539 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-18 02:48:36.379554 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-18 02:48:36.379592 | orchestrator | 2026-03-18 02:48:36.379609 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-18 02:48:36.379625 | orchestrator | 2026-03-18 02:48:36.379641 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-03-18 02:48:36.379656 | orchestrator | Wednesday 18 March 2026 02:48:33 +0000 (0:00:00.475) 0:00:01.114 ******* 2026-03-18 02:48:36.379673 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:48:36.379690 | orchestrator | 2026-03-18 02:48:36.379707 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-18 02:48:36.379720 | orchestrator | Wednesday 18 March 2026 02:48:34 +0000 (0:00:00.649) 0:00:01.764 ******* 2026-03-18 02:48:36.379737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:48:36.379753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:48:36.379766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-18 02:48:36.379797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-18 02:48:36.379824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:48:36.379836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-18 02:48:36.379847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-18 02:48:36.379857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-18 02:48:36.379867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-18 02:48:36.379878 | orchestrator | 2026-03-18 02:48:36.379888 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-03-18 02:48:36.379906 | orchestrator | Wednesday 18 March 2026 02:48:36 +0000 (0:00:01.733) 0:00:03.497 ******* 2026-03-18 02:48:42.377867 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:48:42.377973 | orchestrator | 2026-03-18 02:48:42.377989 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-18 02:48:42.378003 | orchestrator | Wednesday 18 March 2026 02:48:36 +0000 (0:00:00.340) 0:00:03.838 ******* 2026-03-18 02:48:42.378075 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:48:42.378088 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:48:42.378131 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:48:42.378144 | orchestrator | 2026-03-18 02:48:42.378182 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-18 02:48:42.378193 | orchestrator | Wednesday 18 March 2026 02:48:37 +0000 (0:00:00.337) 0:00:04.175 ******* 2026-03-18 02:48:42.378205 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-18 02:48:42.378216 | orchestrator | 2026-03-18 02:48:42.378227 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-18 02:48:42.378238 | orchestrator | Wednesday 18 March 2026 02:48:38 +0000 (0:00:01.002) 0:00:05.178 ******* 2026-03-18 02:48:42.378249 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:48:42.378261 | orchestrator | 2026-03-18 02:48:42.378272 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-18 02:48:42.378283 | orchestrator | Wednesday 18 March 2026 02:48:38 +0000 (0:00:00.588) 0:00:05.767 ******* 2026-03-18 02:48:42.378301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:48:42.378318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:48:42.378346 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:48:42.378407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-18 02:48:42.378424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-18 02:48:42.378437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-18 02:48:42.378451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-18 02:48:42.378463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-18 02:48:42.378476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-18 02:48:42.378498 | orchestrator | 2026-03-18 02:48:42.378510 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-18 02:48:42.378523 | orchestrator | Wednesday 18 March 2026 02:48:41 +0000 (0:00:03.106) 0:00:08.873 ******* 2026-03-18 02:48:42.378546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-18 02:48:43.256076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 02:48:43.256277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 02:48:43.256306 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:48:43.256321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-18 02:48:43.256353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 02:48:43.256362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 02:48:43.256370 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:48:43.256402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-18 02:48:43.256412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-18 02:48:43.256420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 02:48:43.256428 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:48:43.256437 | orchestrator | 2026-03-18 02:48:43.256446 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-18 02:48:43.256455 | orchestrator | Wednesday 18 March 2026 02:48:42 +0000 (0:00:00.630) 0:00:09.504 ******* 2026-03-18 02:48:43.256471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-18 02:48:43.256484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 02:48:43.256500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 02:48:46.390998 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:48:46.391073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-18 02:48:46.391081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 02:48:46.391117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 02:48:46.391122 | 
orchestrator | skipping: [testbed-node-1] 2026-03-18 02:48:46.391126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-18 02:48:46.391142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 02:48:46.391155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 02:48:46.391159 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:48:46.391163 | orchestrator | 2026-03-18 02:48:46.391168 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-18 02:48:46.391173 | orchestrator | Wednesday 18 March 2026 02:48:43 +0000 (0:00:00.875) 0:00:10.379 ******* 2026-03-18 02:48:46.391177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:48:46.391185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:48:46.391193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:48:46.391201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-18 02:48:51.269960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-18 02:48:51.270208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-03-18 02:48:51.270248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-18 02:48:51.270256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-18 02:48:51.270275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-18 
02:48:51.270282 | orchestrator | 2026-03-18 02:48:51.270290 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-18 02:48:51.270297 | orchestrator | Wednesday 18 March 2026 02:48:46 +0000 (0:00:03.135) 0:00:13.515 ******* 2026-03-18 02:48:51.270322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:48:51.270331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-03-18 02:48:51.270349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:48:51.270361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 02:48:51.270378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:48:51.270397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 02:48:55.135493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-18 02:48:55.135615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-18 02:48:55.135627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-18 02:48:55.135637 | orchestrator | 2026-03-18 02:48:55.135646 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-18 02:48:55.135655 | orchestrator | Wednesday 18 March 2026 02:48:51 +0000 (0:00:04.874) 0:00:18.390 ******* 2026-03-18 02:48:55.135663 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:48:55.135671 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:48:55.135678 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:48:55.135686 | orchestrator | 
2026-03-18 02:48:55.135693 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-18 02:48:55.135700 | orchestrator | Wednesday 18 March 2026 02:48:52 +0000 (0:00:01.451) 0:00:19.841 ******* 2026-03-18 02:48:55.135708 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:48:55.135715 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:48:55.135722 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:48:55.135729 | orchestrator | 2026-03-18 02:48:55.135736 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-18 02:48:55.135743 | orchestrator | Wednesday 18 March 2026 02:48:53 +0000 (0:00:00.845) 0:00:20.687 ******* 2026-03-18 02:48:55.135751 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:48:55.135758 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:48:55.135765 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:48:55.135772 | orchestrator | 2026-03-18 02:48:55.135779 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-18 02:48:55.135786 | orchestrator | Wednesday 18 March 2026 02:48:54 +0000 (0:00:00.559) 0:00:21.246 ******* 2026-03-18 02:48:55.135793 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:48:55.135800 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:48:55.135810 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:48:55.135838 | orchestrator | 2026-03-18 02:48:55.135851 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-18 02:48:55.135863 | orchestrator | Wednesday 18 March 2026 02:48:54 +0000 (0:00:00.341) 0:00:21.588 ******* 2026-03-18 02:48:55.135899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-18 02:48:55.135926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 02:48:55.135940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 02:48:55.135952 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:48:55.135961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-18 02:48:55.135974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 02:48:55.135982 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 02:48:55.135996 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:48:55.136010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-18 02:49:14.478145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 02:49:14.478273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 02:49:14.478292 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:49:14.478306 | orchestrator | 2026-03-18 02:49:14.478319 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-18 02:49:14.478332 | orchestrator | Wednesday 18 March 2026 02:48:55 +0000 (0:00:00.666) 0:00:22.254 ******* 2026-03-18 02:49:14.478343 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:49:14.478354 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:49:14.478364 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:49:14.478375 | orchestrator | 2026-03-18 02:49:14.478386 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-18 02:49:14.478397 | orchestrator | Wednesday 18 March 2026 02:48:55 +0000 (0:00:00.323) 0:00:22.578 ******* 2026-03-18 02:49:14.478408 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-18 02:49:14.478420 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-18 02:49:14.478431 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-18 02:49:14.478442 | orchestrator | 2026-03-18 02:49:14.478453 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-18 02:49:14.478463 | orchestrator | Wednesday 18 March 2026 02:48:57 +0000 (0:00:01.792) 0:00:24.371 ******* 2026-03-18 02:49:14.478496 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-18 02:49:14.478507 | orchestrator | 2026-03-18 02:49:14.478518 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-18 02:49:14.478547 | orchestrator | Wednesday 18 March 2026 02:48:58 +0000 (0:00:01.011) 0:00:25.382 ******* 2026-03-18 02:49:14.478567 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:49:14.478579 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:49:14.478590 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:49:14.478600 | orchestrator | 2026-03-18 02:49:14.478611 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-18 02:49:14.478622 | orchestrator | Wednesday 18 March 2026 02:48:58 +0000 (0:00:00.599) 0:00:25.982 ******* 2026-03-18 02:49:14.478633 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-18 02:49:14.478643 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-18 02:49:14.478654 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-18 02:49:14.478665 | orchestrator | 2026-03-18 02:49:14.478676 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-18 02:49:14.478687 | orchestrator | Wednesday 18 March 2026 02:48:59 +0000 (0:00:01.078) 
0:00:27.060 ******* 2026-03-18 02:49:14.478698 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:49:14.478710 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:49:14.478720 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:49:14.478731 | orchestrator | 2026-03-18 02:49:14.478742 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-18 02:49:14.478752 | orchestrator | Wednesday 18 March 2026 02:49:00 +0000 (0:00:00.549) 0:00:27.610 ******* 2026-03-18 02:49:14.478763 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-18 02:49:14.478774 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-18 02:49:14.478785 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-18 02:49:14.478796 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-18 02:49:14.478807 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-18 02:49:14.478817 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-18 02:49:14.478828 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-18 02:49:14.478840 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-18 02:49:14.478947 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-18 02:49:14.478974 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-18 02:49:14.478986 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-18 
02:49:14.478997 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-18 02:49:14.479008 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-18 02:49:14.479019 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-18 02:49:14.479030 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-18 02:49:14.479040 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-18 02:49:14.479051 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-18 02:49:14.479062 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-18 02:49:14.479085 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-18 02:49:14.479096 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-18 02:49:14.479142 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-18 02:49:14.479154 | orchestrator | 2026-03-18 02:49:14.479165 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-18 02:49:14.479176 | orchestrator | Wednesday 18 March 2026 02:49:09 +0000 (0:00:09.021) 0:00:36.632 ******* 2026-03-18 02:49:14.479186 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-18 02:49:14.479197 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-18 02:49:14.479208 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-18 02:49:14.479218 
| orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-18 02:49:14.479229 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-18 02:49:14.479240 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-18 02:49:14.479250 | orchestrator | 2026-03-18 02:49:14.479261 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-18 02:49:14.479272 | orchestrator | Wednesday 18 March 2026 02:49:12 +0000 (0:00:02.670) 0:00:39.302 ******* 2026-03-18 02:49:14.479293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:49:14.479328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:50:55.887219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-18 02:50:55.887324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-18 02:50:55.887346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-18 02:50:55.887353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-18 02:50:55.887359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-18 02:50:55.887378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-18 02:50:55.887384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-18 02:50:55.887396 | orchestrator | 2026-03-18 02:50:55.887403 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-03-18 02:50:55.887410 | orchestrator | Wednesday 18 March 2026 02:49:14 +0000 (0:00:02.296) 0:00:41.599 ******* 2026-03-18 02:50:55.887416 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:50:55.887422 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:50:55.887428 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:50:55.887441 | orchestrator | 2026-03-18 02:50:55.887447 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-18 02:50:55.887453 | orchestrator | Wednesday 18 March 2026 02:49:15 +0000 (0:00:00.554) 0:00:42.153 ******* 2026-03-18 02:50:55.887458 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:50:55.887463 | orchestrator | 2026-03-18 02:50:55.887469 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-18 02:50:55.887474 | orchestrator | Wednesday 18 March 2026 02:49:17 +0000 (0:00:02.188) 0:00:44.342 ******* 2026-03-18 02:50:55.887480 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:50:55.887485 | orchestrator | 2026-03-18 02:50:55.887490 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-18 02:50:55.887496 | orchestrator | Wednesday 18 March 2026 02:49:19 +0000 (0:00:02.130) 0:00:46.472 ******* 2026-03-18 02:50:55.887501 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:50:55.887507 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:50:55.887512 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:50:55.887517 | orchestrator | 2026-03-18 02:50:55.887523 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-18 02:50:55.887528 | orchestrator | Wednesday 18 March 2026 02:49:20 +0000 (0:00:00.878) 0:00:47.350 ******* 2026-03-18 02:50:55.887534 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:50:55.887539 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:50:55.887544 | orchestrator | ok: 
[testbed-node-2] 2026-03-18 02:50:55.887550 | orchestrator | 2026-03-18 02:50:55.887555 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-18 02:50:55.887561 | orchestrator | Wednesday 18 March 2026 02:49:20 +0000 (0:00:00.343) 0:00:47.694 ******* 2026-03-18 02:50:55.887567 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:50:55.887572 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:50:55.887577 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:50:55.887583 | orchestrator | 2026-03-18 02:50:55.887592 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-18 02:50:55.887597 | orchestrator | Wednesday 18 March 2026 02:49:21 +0000 (0:00:00.633) 0:00:48.327 ******* 2026-03-18 02:50:55.887603 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:50:55.887608 | orchestrator | 2026-03-18 02:50:55.887613 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-18 02:50:55.887619 | orchestrator | Wednesday 18 March 2026 02:49:35 +0000 (0:00:14.135) 0:01:02.463 ******* 2026-03-18 02:50:55.887624 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:50:55.887629 | orchestrator | 2026-03-18 02:50:55.887635 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-18 02:50:55.887640 | orchestrator | Wednesday 18 March 2026 02:49:45 +0000 (0:00:10.364) 0:01:12.827 ******* 2026-03-18 02:50:55.887645 | orchestrator | 2026-03-18 02:50:55.887651 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-18 02:50:55.887656 | orchestrator | Wednesday 18 March 2026 02:49:45 +0000 (0:00:00.083) 0:01:12.911 ******* 2026-03-18 02:50:55.887666 | orchestrator | 2026-03-18 02:50:55.887671 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-18 
02:50:55.887677 | orchestrator | Wednesday 18 March 2026 02:49:45 +0000 (0:00:00.074) 0:01:12.985 ******* 2026-03-18 02:50:55.887682 | orchestrator | 2026-03-18 02:50:55.887688 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-18 02:50:55.887694 | orchestrator | Wednesday 18 March 2026 02:49:45 +0000 (0:00:00.073) 0:01:13.059 ******* 2026-03-18 02:50:55.887700 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:50:55.887706 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:50:55.887712 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:50:55.887718 | orchestrator | 2026-03-18 02:50:55.887724 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-18 02:50:55.887730 | orchestrator | Wednesday 18 March 2026 02:50:33 +0000 (0:00:47.292) 0:02:00.351 ******* 2026-03-18 02:50:55.887736 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:50:55.887742 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:50:55.887748 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:50:55.887754 | orchestrator | 2026-03-18 02:50:55.887761 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-18 02:50:55.887767 | orchestrator | Wednesday 18 March 2026 02:50:43 +0000 (0:00:10.287) 0:02:10.638 ******* 2026-03-18 02:50:55.887773 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:50:55.887779 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:50:55.887785 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:50:55.887791 | orchestrator | 2026-03-18 02:50:55.887797 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-18 02:50:55.887803 | orchestrator | Wednesday 18 March 2026 02:50:55 +0000 (0:00:11.760) 0:02:22.399 ******* 2026-03-18 02:50:55.887813 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:51:46.306322 | orchestrator | 2026-03-18 02:51:46.306413 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-18 02:51:46.306423 | orchestrator | Wednesday 18 March 2026 02:50:55 +0000 (0:00:00.614) 0:02:23.014 ******* 2026-03-18 02:51:46.306429 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:51:46.306435 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:51:46.306440 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:51:46.306445 | orchestrator | 2026-03-18 02:51:46.306450 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-18 02:51:46.306456 | orchestrator | Wednesday 18 March 2026 02:50:57 +0000 (0:00:01.238) 0:02:24.252 ******* 2026-03-18 02:51:46.306460 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:51:46.306466 | orchestrator | 2026-03-18 02:51:46.306471 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-18 02:51:46.306476 | orchestrator | Wednesday 18 March 2026 02:50:58 +0000 (0:00:01.847) 0:02:26.100 ******* 2026-03-18 02:51:46.306481 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-18 02:51:46.306486 | orchestrator | 2026-03-18 02:51:46.306490 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-18 02:51:46.306496 | orchestrator | Wednesday 18 March 2026 02:51:10 +0000 (0:00:11.153) 0:02:37.253 ******* 2026-03-18 02:51:46.306504 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-18 02:51:46.306511 | orchestrator | 2026-03-18 02:51:46.306518 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-18 02:51:46.306526 | orchestrator | Wednesday 18 March 2026 02:51:34 +0000 (0:00:24.275) 0:03:01.529 ******* 2026-03-18 02:51:46.306533 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-03-18 02:51:46.306542 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-03-18 02:51:46.306549 | orchestrator |
2026-03-18 02:51:46.306556 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-03-18 02:51:46.306563 | orchestrator | Wednesday 18 March 2026 02:51:41 +0000 (0:00:06.725) 0:03:08.254 *******
2026-03-18 02:51:46.306591 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:51:46.306599 | orchestrator |
2026-03-18 02:51:46.306607 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-03-18 02:51:46.306615 | orchestrator | Wednesday 18 March 2026 02:51:41 +0000 (0:00:00.157) 0:03:08.411 *******
2026-03-18 02:51:46.306621 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:51:46.306626 | orchestrator |
2026-03-18 02:51:46.306631 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-03-18 02:51:46.306635 | orchestrator | Wednesday 18 March 2026 02:51:41 +0000 (0:00:00.130) 0:03:08.541 *******
2026-03-18 02:51:46.306640 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:51:46.306644 | orchestrator |
2026-03-18 02:51:46.306648 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-03-18 02:51:46.306653 | orchestrator | Wednesday 18 March 2026 02:51:41 +0000 (0:00:00.590) 0:03:08.702 *******
2026-03-18 02:51:46.306658 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:51:46.306662 | orchestrator |
2026-03-18 02:51:46.306679 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-03-18 02:51:46.306684 | orchestrator | Wednesday 18 March 2026 02:51:42 +0000 (0:00:00.590) 0:03:09.293 *******
2026-03-18 02:51:46.306688 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:51:46.306693 | orchestrator |
2026-03-18 02:51:46.306697 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-18 02:51:46.306702 | orchestrator | Wednesday 18 March 2026 02:51:45 +0000 (0:00:03.208) 0:03:12.502 *******
2026-03-18 02:51:46.306706 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:51:46.306711 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:51:46.306731 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:51:46.306738 | orchestrator |
2026-03-18 02:51:46.306753 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 02:51:46.306762 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-18 02:51:46.306771 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-18 02:51:46.306779 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-18 02:51:46.306785 | orchestrator |
2026-03-18 02:51:46.306790 | orchestrator |
2026-03-18 02:51:46.306795 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 02:51:46.306799 | orchestrator | Wednesday 18 March 2026 02:51:45 +0000 (0:00:00.501) 0:03:13.003 *******
2026-03-18 02:51:46.306804 | orchestrator | ===============================================================================
2026-03-18 02:51:46.306808 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 47.29s
2026-03-18 02:51:46.306813 | orchestrator | service-ks-register : keystone | Creating services --------------------- 24.28s
2026-03-18 02:51:46.306819 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.14s
2026-03-18 02:51:46.306826 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.76s
2026-03-18 02:51:46.306833 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.15s
2026-03-18 02:51:46.306840 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.36s
2026-03-18 02:51:46.306847 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.29s
2026-03-18 02:51:46.306854 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.02s
2026-03-18 02:51:46.306861 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.73s
2026-03-18 02:51:46.306887 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.87s
2026-03-18 02:51:46.306893 | orchestrator | keystone : Creating default user role ----------------------------------- 3.21s
2026-03-18 02:51:46.306905 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.14s
2026-03-18 02:51:46.306910 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.11s
2026-03-18 02:51:46.306915 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.67s
2026-03-18 02:51:46.306920 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.30s
2026-03-18 02:51:46.306925 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.19s
2026-03-18 02:51:46.306930 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.13s
2026-03-18 02:51:46.306935 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.85s
2026-03-18 02:51:46.306941 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.79s
2026-03-18 02:51:46.306946 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.73s
2026-03-18 02:51:48.842361 | orchestrator | 2026-03-18 02:51:48 | INFO  | Task ff9b2718-6a3d-4f8a-8b1e-0d5e18f44b18 (placement) was prepared for execution.
2026-03-18 02:51:48.842440 | orchestrator | 2026-03-18 02:51:48 | INFO  | It takes a moment until task ff9b2718-6a3d-4f8a-8b1e-0d5e18f44b18 (placement) has been started and output is visible here.
2026-03-18 02:52:24.645563 | orchestrator |
2026-03-18 02:52:24.645694 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-18 02:52:24.645709 | orchestrator |
2026-03-18 02:52:24.645719 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-18 02:52:24.645729 | orchestrator | Wednesday 18 March 2026 02:51:53 +0000 (0:00:00.298) 0:00:00.298 *******
2026-03-18 02:52:24.645738 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:52:24.645748 | orchestrator | ok: [testbed-node-1]
2026-03-18 02:52:24.645758 | orchestrator | ok: [testbed-node-2]
2026-03-18 02:52:24.645766 | orchestrator |
2026-03-18 02:52:24.645777 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-18 02:52:24.645786 | orchestrator | Wednesday 18 March 2026 02:51:53 +0000 (0:00:00.314) 0:00:00.612 *******
2026-03-18 02:52:24.645795 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-03-18 02:52:24.645805 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-03-18 02:52:24.645813 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-03-18 02:52:24.645822 | orchestrator |
2026-03-18 02:52:24.645830 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-03-18 02:52:24.645839 | orchestrator |
2026-03-18 02:52:24.645847 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-18 02:52:24.645856 | orchestrator | 
Wednesday 18 March 2026 02:51:54 +0000 (0:00:00.478) 0:00:01.091 *******
2026-03-18 02:52:24.645883 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:52:24.645893 | orchestrator |
2026-03-18 02:52:24.645901 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-03-18 02:52:24.645910 | orchestrator | Wednesday 18 March 2026 02:51:54 +0000 (0:00:00.636) 0:00:01.727 *******
2026-03-18 02:52:24.645919 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-03-18 02:52:24.645927 | orchestrator |
2026-03-18 02:52:24.645936 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-03-18 02:52:24.645944 | orchestrator | Wednesday 18 March 2026 02:51:58 +0000 (0:00:03.834) 0:00:05.562 *******
2026-03-18 02:52:24.645953 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-03-18 02:52:24.645962 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-03-18 02:52:24.645971 | orchestrator |
2026-03-18 02:52:24.645979 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-03-18 02:52:24.646089 | orchestrator | Wednesday 18 March 2026 02:52:05 +0000 (0:00:06.576) 0:00:12.138 *******
2026-03-18 02:52:24.646136 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-03-18 02:52:24.646152 | orchestrator |
2026-03-18 02:52:24.646168 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-03-18 02:52:24.646180 | orchestrator | Wednesday 18 March 2026 02:52:09 +0000 (0:00:03.791) 0:00:15.929 *******
2026-03-18 02:52:24.646190 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-18 02:52:24.646204 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-03-18 02:52:24.646230 | orchestrator |
2026-03-18 02:52:24.646246 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-03-18 02:52:24.646262 | orchestrator | Wednesday 18 March 2026 02:52:13 +0000 (0:00:04.114) 0:00:20.044 *******
2026-03-18 02:52:24.646276 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-18 02:52:24.646290 | orchestrator |
2026-03-18 02:52:24.646299 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-03-18 02:52:24.646311 | orchestrator | Wednesday 18 March 2026 02:52:16 +0000 (0:00:03.239) 0:00:23.283 *******
2026-03-18 02:52:24.646329 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-03-18 02:52:24.646348 | orchestrator |
2026-03-18 02:52:24.646366 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-18 02:52:24.646380 | orchestrator | Wednesday 18 March 2026 02:52:20 +0000 (0:00:03.761) 0:00:27.045 *******
2026-03-18 02:52:24.646391 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:52:24.646410 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:52:24.646429 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:52:24.646448 | orchestrator |
2026-03-18 02:52:24.646466 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-03-18 02:52:24.646477 | orchestrator | Wednesday 18 March 2026 02:52:20 +0000 (0:00:00.383) 0:00:27.428 *******
2026-03-18 02:52:24.646498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:52:24.646554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:52:24.646580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:52:24.646618 | orchestrator | 2026-03-18 02:52:24.646638 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-18 02:52:24.646657 | orchestrator | Wednesday 18 March 2026 02:52:21 +0000 (0:00:01.083) 0:00:28.511 ******* 2026-03-18 02:52:24.646676 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:52:24.646693 | orchestrator | 2026-03-18 02:52:24.646711 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-18 02:52:24.646728 | orchestrator | Wednesday 18 March 2026 02:52:21 +0000 (0:00:00.341) 0:00:28.853 ******* 2026-03-18 02:52:24.646745 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:52:24.646762 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:52:24.646780 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:52:24.646796 | orchestrator | 2026-03-18 02:52:24.646814 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-18 02:52:24.646831 | orchestrator | Wednesday 18 March 2026 02:52:22 +0000 (0:00:00.323) 0:00:29.176 ******* 2026-03-18 02:52:24.646848 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 02:52:24.646864 | orchestrator | 2026-03-18 02:52:24.646880 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-18 02:52:24.646897 | orchestrator | Wednesday 18 March 2026 
02:52:22 +0000 (0:00:00.579) 0:00:29.756 ******* 2026-03-18 02:52:24.646913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:52:24.646949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:52:27.525759 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:52:27.525944 | orchestrator | 2026-03-18 02:52:27.525975 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-18 02:52:27.525995 | orchestrator | Wednesday 18 March 2026 02:52:24 +0000 (0:00:01.744) 0:00:31.500 ******* 2026-03-18 02:52:27.526136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-18 02:52:27.526164 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:52:27.526183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-18 02:52:27.526199 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:52:27.526215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-18 02:52:27.526248 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:52:27.526267 | orchestrator | 2026-03-18 02:52:27.526285 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-18 02:52:27.526345 | orchestrator | Wednesday 18 March 2026 02:52:25 +0000 (0:00:00.525) 0:00:32.026 ******* 2026-03-18 02:52:27.526390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-18 02:52:27.526409 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:52:27.526427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-18 02:52:27.526446 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:52:27.526463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-18 02:52:27.526481 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:52:27.526498 | orchestrator | 2026-03-18 02:52:27.526515 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-18 02:52:27.526533 | orchestrator | Wednesday 18 March 2026 02:52:25 +0000 (0:00:00.708) 0:00:32.735 ******* 2026-03-18 02:52:27.526553 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:52:27.526603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:52:34.511058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:52:34.511281 | orchestrator | 2026-03-18 02:52:34.511299 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-18 02:52:34.511310 | orchestrator | Wednesday 18 March 2026 02:52:27 +0000 (0:00:01.647) 0:00:34.383 ******* 2026-03-18 02:52:34.511319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-03-18 02:52:34.511329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:52:34.511363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:52:34.511372 | orchestrator | 2026-03-18 02:52:34.511397 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-03-18 02:52:34.511405 | orchestrator | Wednesday 18 March 2026 02:52:29 +0000 (0:00:02.333) 0:00:36.716 *******
2026-03-18 02:52:34.511432 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-18 02:52:34.511442 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-18 02:52:34.511450 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-18 02:52:34.511458 | orchestrator |
2026-03-18 02:52:34.511466 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-03-18 02:52:34.511473 | orchestrator | Wednesday 18 March 2026 02:52:31 +0000 (0:00:01.429) 0:00:38.146 *******
2026-03-18 02:52:34.511482 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:52:34.511491 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:52:34.511499 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:52:34.511506 | orchestrator |
2026-03-18 02:52:34.511514 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-03-18 02:52:34.511522 | orchestrator | Wednesday 18 March 2026 02:52:32 +0000 (0:00:01.349) 0:00:39.496 *******
2026-03-18 02:52:34.511530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-18 02:52:34.511541 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:52:34.511551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-18 02:52:34.511567 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:52:34.511577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-18 02:52:34.511587 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:52:34.511596 | orchestrator | 2026-03-18 02:52:34.511605 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-18 02:52:34.511614 | orchestrator | Wednesday 18 March 2026 02:52:33 +0000 (0:00:00.808) 0:00:40.305 ******* 2026-03-18 02:52:34.511638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:53:00.994609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:53:00.994766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-18 02:53:00.994794 | orchestrator | 2026-03-18 02:53:00.994801 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-18 02:53:00.994807 | orchestrator | Wednesday 18 March 2026 02:52:34 +0000 (0:00:01.069) 0:00:41.374 ******* 2026-03-18 02:53:00.994815 | orchestrator | changed: [testbed-node-0] 2026-03-18 
02:53:00.994824 | orchestrator | 2026-03-18 02:53:00.994832 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-18 02:53:00.994839 | orchestrator | Wednesday 18 March 2026 02:52:36 +0000 (0:00:02.130) 0:00:43.504 ******* 2026-03-18 02:53:00.994846 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:53:00.994855 | orchestrator | 2026-03-18 02:53:00.994906 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-18 02:53:00.994913 | orchestrator | Wednesday 18 March 2026 02:52:38 +0000 (0:00:02.175) 0:00:45.679 ******* 2026-03-18 02:53:00.994918 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:53:00.994922 | orchestrator | 2026-03-18 02:53:00.994927 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-18 02:53:00.994932 | orchestrator | Wednesday 18 March 2026 02:52:52 +0000 (0:00:14.032) 0:00:59.712 ******* 2026-03-18 02:53:00.994936 | orchestrator | 2026-03-18 02:53:00.994941 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-18 02:53:00.994945 | orchestrator | Wednesday 18 March 2026 02:52:52 +0000 (0:00:00.071) 0:00:59.783 ******* 2026-03-18 02:53:00.994950 | orchestrator | 2026-03-18 02:53:00.994954 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-18 02:53:00.994959 | orchestrator | Wednesday 18 March 2026 02:52:52 +0000 (0:00:00.071) 0:00:59.854 ******* 2026-03-18 02:53:00.994963 | orchestrator | 2026-03-18 02:53:00.994968 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-18 02:53:00.994972 | orchestrator | Wednesday 18 March 2026 02:52:53 +0000 (0:00:00.079) 0:00:59.934 ******* 2026-03-18 02:53:00.994977 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:53:00.994981 | orchestrator | changed: [testbed-node-2] 2026-03-18 
02:53:00.994986 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:53:00.994990 | orchestrator | 2026-03-18 02:53:00.994995 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 02:53:00.995012 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-18 02:53:00.995019 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-18 02:53:00.995023 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-18 02:53:00.995028 | orchestrator | 2026-03-18 02:53:00.995033 | orchestrator | 2026-03-18 02:53:00.995037 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 02:53:00.995042 | orchestrator | Wednesday 18 March 2026 02:53:00 +0000 (0:00:07.529) 0:01:07.463 ******* 2026-03-18 02:53:00.995046 | orchestrator | =============================================================================== 2026-03-18 02:53:00.995051 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.03s 2026-03-18 02:53:00.995069 | orchestrator | placement : Restart placement-api container ----------------------------- 7.53s 2026-03-18 02:53:00.995079 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.58s 2026-03-18 02:53:00.995084 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.11s 2026-03-18 02:53:00.995089 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.83s 2026-03-18 02:53:00.995094 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.79s 2026-03-18 02:53:00.995098 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.76s 2026-03-18 02:53:00.995118 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.24s 2026-03-18 02:53:00.995123 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.33s 2026-03-18 02:53:00.995128 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.18s 2026-03-18 02:53:00.995132 | orchestrator | placement : Creating placement databases -------------------------------- 2.13s 2026-03-18 02:53:00.995137 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.74s 2026-03-18 02:53:00.995141 | orchestrator | placement : Copying over config.json files for services ----------------- 1.65s 2026-03-18 02:53:00.995146 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.43s 2026-03-18 02:53:00.995151 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.35s 2026-03-18 02:53:00.995157 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.08s 2026-03-18 02:53:00.995162 | orchestrator | placement : Check placement containers ---------------------------------- 1.07s 2026-03-18 02:53:00.995167 | orchestrator | placement : Copying over existing policy file --------------------------- 0.81s 2026-03-18 02:53:00.995172 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.71s 2026-03-18 02:53:00.995177 | orchestrator | placement : include_tasks ----------------------------------------------- 0.64s 2026-03-18 02:53:03.568516 | orchestrator | 2026-03-18 02:53:03 | INFO  | Task 5348d904-e57f-46ee-900e-428d479384d4 (neutron) was prepared for execution. 2026-03-18 02:53:03.568614 | orchestrator | 2026-03-18 02:53:03 | INFO  | It takes a moment until task 5348d904-e57f-46ee-900e-428d479384d4 (neutron) has been started and output is visible here. 
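The PLAY RECAP block above uses Ansible's fixed `host : key=value …` layout, which makes it easy to post-process job logs mechanically (e.g. to flag any non-zero `failed` or `unreachable` count in CI). A minimal sketch, assuming the standard recap formatting shown in this log; the function name is illustrative, not part of any tool used here:

```python
import re

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Parse one Ansible PLAY RECAP line into (host, counters).

    Expects the standard layout, e.g.:
    testbed-node-0 : ok=21 changed=16 unreachable=0 failed=0 ...
    """
    host, _, rest = line.partition(":")
    # Collect every key=value pair after the colon as an integer counter.
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters

host, stats = parse_recap_line(
    "testbed-node-0 : ok=21 changed=16 unreachable=0 failed=0 "
    "skipped=6 rescued=0 ignored=0"
)
# host == "testbed-node-0"; stats["failed"] == 0
```

A watcher built on this could simply assert `stats["failed"] == 0 and stats["unreachable"] == 0` for each host in the recap before letting the next role (here, neutron) proceed.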
2026-03-18 02:53:53.216717 | orchestrator | 2026-03-18 02:53:53.216864 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 02:53:53.216881 | orchestrator | 2026-03-18 02:53:53.216894 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 02:53:53.216906 | orchestrator | Wednesday 18 March 2026 02:53:08 +0000 (0:00:00.267) 0:00:00.267 ******* 2026-03-18 02:53:53.216917 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:53:53.216930 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:53:53.216941 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:53:53.216951 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:53:53.216962 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:53:53.216973 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:53:53.216984 | orchestrator | 2026-03-18 02:53:53.216995 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 02:53:53.217006 | orchestrator | Wednesday 18 March 2026 02:53:08 +0000 (0:00:00.777) 0:00:01.044 ******* 2026-03-18 02:53:53.217017 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-18 02:53:53.217028 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-18 02:53:53.217039 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-18 02:53:53.217050 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-18 02:53:53.217061 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-18 02:53:53.217072 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-18 02:53:53.217083 | orchestrator | 2026-03-18 02:53:53.217094 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-18 02:53:53.217137 | orchestrator | 2026-03-18 02:53:53.217176 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-03-18 02:53:53.217190 | orchestrator | Wednesday 18 March 2026 02:53:09 +0000 (0:00:00.697) 0:00:01.742 ******* 2026-03-18 02:53:53.217211 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 02:53:53.217231 | orchestrator | 2026-03-18 02:53:53.217250 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-18 02:53:53.217267 | orchestrator | Wednesday 18 March 2026 02:53:10 +0000 (0:00:01.397) 0:00:03.140 ******* 2026-03-18 02:53:53.217307 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:53:53.217326 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:53:53.217346 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:53:53.217364 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:53:53.217381 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:53:53.217400 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:53:53.217419 | orchestrator | 2026-03-18 02:53:53.217437 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-18 02:53:53.217456 | orchestrator | Wednesday 18 March 2026 02:53:12 +0000 (0:00:01.388) 0:00:04.529 ******* 2026-03-18 02:53:53.217476 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:53:53.217488 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:53:53.217499 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:53:53.217509 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:53:53.217520 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:53:53.217530 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:53:53.217541 | orchestrator | 2026-03-18 02:53:53.217552 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-18 02:53:53.217562 | orchestrator | Wednesday 18 March 2026 02:53:13 +0000 (0:00:01.125) 0:00:05.655 ******* 
2026-03-18 02:53:53.217573 | orchestrator | ok: [testbed-node-0] => { 2026-03-18 02:53:53.217586 | orchestrator |  "changed": false, 2026-03-18 02:53:53.217597 | orchestrator |  "msg": "All assertions passed" 2026-03-18 02:53:53.217608 | orchestrator | } 2026-03-18 02:53:53.217619 | orchestrator | ok: [testbed-node-1] => { 2026-03-18 02:53:53.217630 | orchestrator |  "changed": false, 2026-03-18 02:53:53.217640 | orchestrator |  "msg": "All assertions passed" 2026-03-18 02:53:53.217651 | orchestrator | } 2026-03-18 02:53:53.217662 | orchestrator | ok: [testbed-node-2] => { 2026-03-18 02:53:53.217672 | orchestrator |  "changed": false, 2026-03-18 02:53:53.217683 | orchestrator |  "msg": "All assertions passed" 2026-03-18 02:53:53.217694 | orchestrator | } 2026-03-18 02:53:53.217704 | orchestrator | ok: [testbed-node-3] => { 2026-03-18 02:53:53.217715 | orchestrator |  "changed": false, 2026-03-18 02:53:53.217726 | orchestrator |  "msg": "All assertions passed" 2026-03-18 02:53:53.217736 | orchestrator | } 2026-03-18 02:53:53.217747 | orchestrator | ok: [testbed-node-4] => { 2026-03-18 02:53:53.217758 | orchestrator |  "changed": false, 2026-03-18 02:53:53.217768 | orchestrator |  "msg": "All assertions passed" 2026-03-18 02:53:53.217779 | orchestrator | } 2026-03-18 02:53:53.217791 | orchestrator | ok: [testbed-node-5] => { 2026-03-18 02:53:53.217802 | orchestrator |  "changed": false, 2026-03-18 02:53:53.217813 | orchestrator |  "msg": "All assertions passed" 2026-03-18 02:53:53.217824 | orchestrator | } 2026-03-18 02:53:53.217834 | orchestrator | 2026-03-18 02:53:53.217845 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-18 02:53:53.217856 | orchestrator | Wednesday 18 March 2026 02:53:14 +0000 (0:00:00.895) 0:00:06.550 ******* 2026-03-18 02:53:53.217867 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:53:53.217877 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:53:53.217888 | orchestrator 
| skipping: [testbed-node-2] 2026-03-18 02:53:53.217899 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:53:53.217909 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:53:53.217920 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:53:53.217930 | orchestrator | 2026-03-18 02:53:53.217941 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-18 02:53:53.217963 | orchestrator | Wednesday 18 March 2026 02:53:14 +0000 (0:00:00.665) 0:00:07.216 ******* 2026-03-18 02:53:53.217975 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-18 02:53:53.217986 | orchestrator | 2026-03-18 02:53:53.217996 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-18 02:53:53.218007 | orchestrator | Wednesday 18 March 2026 02:53:18 +0000 (0:00:03.808) 0:00:11.024 ******* 2026-03-18 02:53:53.218089 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-18 02:53:53.218175 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-18 02:53:53.218189 | orchestrator | 2026-03-18 02:53:53.218226 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-18 02:53:53.218238 | orchestrator | Wednesday 18 March 2026 02:53:25 +0000 (0:00:06.655) 0:00:17.680 ******* 2026-03-18 02:53:53.218249 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-18 02:53:53.218260 | orchestrator | 2026-03-18 02:53:53.218271 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-18 02:53:53.218282 | orchestrator | Wednesday 18 March 2026 02:53:28 +0000 (0:00:03.172) 0:00:20.852 ******* 2026-03-18 02:53:53.218293 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-18 02:53:53.218304 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-03-18 02:53:53.218316 | orchestrator | 2026-03-18 02:53:53.218336 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-18 02:53:53.218356 | orchestrator | Wednesday 18 March 2026 02:53:32 +0000 (0:00:03.689) 0:00:24.542 ******* 2026-03-18 02:53:53.218376 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-18 02:53:53.218395 | orchestrator | 2026-03-18 02:53:53.218414 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-18 02:53:53.218436 | orchestrator | Wednesday 18 March 2026 02:53:35 +0000 (0:00:03.200) 0:00:27.743 ******* 2026-03-18 02:53:53.218456 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-18 02:53:53.218475 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-18 02:53:53.218496 | orchestrator | 2026-03-18 02:53:53.218515 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-18 02:53:53.218534 | orchestrator | Wednesday 18 March 2026 02:53:43 +0000 (0:00:08.241) 0:00:35.985 ******* 2026-03-18 02:53:53.218545 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:53:53.218556 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:53:53.218566 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:53:53.218577 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:53:53.218588 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:53:53.218598 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:53:53.218609 | orchestrator | 2026-03-18 02:53:53.218619 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-18 02:53:53.218630 | orchestrator | Wednesday 18 March 2026 02:53:44 +0000 (0:00:00.825) 0:00:36.811 ******* 2026-03-18 02:53:53.218649 | orchestrator | skipping: [testbed-node-0] 2026-03-18 
02:53:53.218661 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:53:53.218671 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:53:53.218682 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:53:53.218692 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:53:53.218703 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:53:53.218713 | orchestrator | 2026-03-18 02:53:53.218724 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-18 02:53:53.218735 | orchestrator | Wednesday 18 March 2026 02:53:46 +0000 (0:00:02.337) 0:00:39.148 ******* 2026-03-18 02:53:53.218746 | orchestrator | ok: [testbed-node-0] 2026-03-18 02:53:53.218756 | orchestrator | ok: [testbed-node-1] 2026-03-18 02:53:53.218768 | orchestrator | ok: [testbed-node-2] 2026-03-18 02:53:53.218799 | orchestrator | ok: [testbed-node-3] 2026-03-18 02:53:53.218815 | orchestrator | ok: [testbed-node-4] 2026-03-18 02:53:53.218830 | orchestrator | ok: [testbed-node-5] 2026-03-18 02:53:53.218848 | orchestrator | 2026-03-18 02:53:53.218868 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-18 02:53:53.218886 | orchestrator | Wednesday 18 March 2026 02:53:48 +0000 (0:00:01.282) 0:00:40.431 ******* 2026-03-18 02:53:53.218904 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:53:53.218922 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:53:53.218933 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:53:53.218944 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:53:53.218954 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:53:53.218965 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:53:53.218975 | orchestrator | 2026-03-18 02:53:53.218986 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-18 02:53:53.218997 | orchestrator | Wednesday 18 March 2026 02:53:50 +0000 (0:00:02.182) 
0:00:42.613 ******* 2026-03-18 02:53:53.219012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:53:53.219042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:53:58.839749 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:53:58.839863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-18 02:53:58.839890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-18 02:53:58.839898 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-18 02:53:58.839906 | orchestrator | 2026-03-18 02:53:58.839914 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-18 02:53:58.839921 | orchestrator | Wednesday 18 March 2026 02:53:53 +0000 (0:00:02.849) 0:00:45.463 ******* 2026-03-18 02:53:58.839928 | orchestrator | [WARNING]: Skipped 2026-03-18 02:53:58.839936 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-18 02:53:58.839943 | orchestrator | due to this access issue: 2026-03-18 02:53:58.839951 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-18 02:53:58.839957 | orchestrator | a directory 2026-03-18 02:53:58.839964 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-18 02:53:58.839970 | orchestrator | 2026-03-18 02:53:58.839981 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-18 02:53:58.839992 | orchestrator | Wednesday 18 March 2026 02:53:54 +0000 (0:00:00.902) 0:00:46.366 ******* 2026-03-18 02:53:58.840004 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 02:53:58.840016 | orchestrator | 2026-03-18 02:53:58.840026 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-18 02:53:58.840054 | orchestrator | Wednesday 18 March 2026 02:53:55 +0000 (0:00:01.402) 0:00:47.768 ******* 2026-03-18 02:53:58.840064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:53:58.840089 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-18 02:53:58.840126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:53:58.840139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:53:58.840159 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-18 02:54:03.826308 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-18 02:54:03.826426 | orchestrator | 2026-03-18 02:54:03.826440 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-18 02:54:03.826450 | orchestrator | Wednesday 18 March 2026 02:53:58 +0000 (0:00:03.314) 0:00:51.082 ******* 2026-03-18 02:54:03.826475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:54:03.826487 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:54:03.826498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:54:03.826507 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:54:03.826516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:54:03.826525 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:54:03.826549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:03.826566 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:54:03.826580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:03.826589 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:54:03.826598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 
02:54:03.826607 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:54:03.826616 | orchestrator | 2026-03-18 02:54:03.826641 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-18 02:54:03.826660 | orchestrator | Wednesday 18 March 2026 02:54:00 +0000 (0:00:02.104) 0:00:53.187 ******* 2026-03-18 02:54:03.826670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:54:03.826679 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:54:03.826694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:54:09.735356 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:54:09.735492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:54:09.735533 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:54:09.735543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:09.735551 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:54:09.735559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:09.735566 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:54:09.735573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:09.735581 | orchestrator | skipping: [testbed-node-4] 
2026-03-18 02:54:09.735588 | orchestrator | 2026-03-18 02:54:09.735615 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-18 02:54:09.735624 | orchestrator | Wednesday 18 March 2026 02:54:03 +0000 (0:00:02.886) 0:00:56.074 ******* 2026-03-18 02:54:09.735631 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:54:09.735638 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:54:09.735645 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:54:09.735651 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:54:09.735658 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:54:09.735664 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:54:09.735671 | orchestrator | 2026-03-18 02:54:09.735678 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-18 02:54:09.735685 | orchestrator | Wednesday 18 March 2026 02:54:06 +0000 (0:00:02.446) 0:00:58.521 ******* 2026-03-18 02:54:09.735692 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:54:09.735698 | orchestrator | 2026-03-18 02:54:09.735705 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-18 02:54:09.735725 | orchestrator | Wednesday 18 March 2026 02:54:06 +0000 (0:00:00.160) 0:00:58.682 ******* 2026-03-18 02:54:09.735733 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:54:09.735739 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:54:09.735746 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:54:09.735753 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:54:09.735759 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:54:09.735766 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:54:09.735772 | orchestrator | 2026-03-18 02:54:09.735779 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-18 02:54:09.735786 | orchestrator | 
Wednesday 18 March 2026 02:54:07 +0000 (0:00:00.614) 0:00:59.296 ******* 2026-03-18 02:54:09.735797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:54:09.735805 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:54:09.735812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2026-03-18 02:54:09.735819 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:54:09.735826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:54:09.735839 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:54:09.735847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:09.735854 | orchestrator | skipping: [testbed-node-3] 2026-03-18 
02:54:09.735867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:18.466458 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:54:18.466611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:18.466630 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:54:18.466644 | orchestrator | 2026-03-18 02:54:18.466658 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-18 02:54:18.466671 | orchestrator | Wednesday 18 March 2026 02:54:09 +0000 (0:00:02.675) 0:01:01.972 
******* 2026-03-18 02:54:18.466683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:54:18.466717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:54:18.466730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:54:18.466796 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-18 02:54:18.466810 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-18 02:54:18.466821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-18 02:54:18.466840 | orchestrator | 2026-03-18 02:54:18.466851 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-18 02:54:18.466861 | orchestrator | Wednesday 18 March 2026 02:54:12 +0000 (0:00:03.267) 0:01:05.239 ******* 2026-03-18 02:54:18.466872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:54:18.466883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:54:18.466906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:54:24.008792 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-18 02:54:24.008935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-18 
02:54:24.008948 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-18 02:54:24.008956 | orchestrator | 2026-03-18 02:54:24.008965 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-18 02:54:24.008974 | orchestrator | Wednesday 18 March 2026 02:54:18 +0000 (0:00:05.472) 0:01:10.712 ******* 2026-03-18 02:54:24.008983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-03-18 02:54:24.008991 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:54:24.009031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:54:24.009045 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:54:24.009052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:54:24.009059 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:54:24.009066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:24.009073 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:54:24.009080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:24.009087 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:54:24.009094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:24.009120 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:54:24.009127 | orchestrator | 2026-03-18 02:54:24.009137 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-18 02:54:24.009144 | orchestrator | Wednesday 18 March 2026 02:54:21 +0000 (0:00:02.754) 0:01:13.466 ******* 2026-03-18 02:54:24.009149 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:54:24.009156 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:54:24.009162 | orchestrator | changed: [testbed-node-0] 2026-03-18 02:54:24.009174 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:54:24.009180 | orchestrator | changed: [testbed-node-1] 2026-03-18 02:54:24.009186 | orchestrator | changed: [testbed-node-2] 2026-03-18 02:54:24.009192 | orchestrator | 2026-03-18 02:54:24.009199 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-18 02:54:24.009213 | orchestrator | Wednesday 18 March 2026 02:54:23 +0000 (0:00:02.785) 0:01:16.251 ******* 2026-03-18 02:54:43.945511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:43.945661 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:54:43.945697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:43.945718 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:54:43.945739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:43.945759 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:54:43.945779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:54:43.945893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:54:43.945955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-18 02:54:43.946159 | orchestrator | 2026-03-18 02:54:43.946186 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-18 02:54:43.946208 | orchestrator | Wednesday 18 March 2026 02:54:27 +0000 (0:00:03.484) 0:01:19.736 ******* 2026-03-18 02:54:43.946226 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:54:43.946246 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:54:43.946263 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:54:43.946281 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:54:43.946298 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:54:43.946315 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:54:43.946353 | orchestrator | 2026-03-18 02:54:43.946387 | orchestrator | TASK [neutron : Copying over 
openvswitch_agent.ini] **************************** 2026-03-18 02:54:43.946403 | orchestrator | Wednesday 18 March 2026 02:54:29 +0000 (0:00:02.453) 0:01:22.189 ******* 2026-03-18 02:54:43.946414 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:54:43.946425 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:54:43.946435 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:54:43.946452 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:54:43.946470 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:54:43.946489 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:54:43.946507 | orchestrator | 2026-03-18 02:54:43.946526 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-18 02:54:43.946545 | orchestrator | Wednesday 18 March 2026 02:54:32 +0000 (0:00:02.272) 0:01:24.461 ******* 2026-03-18 02:54:43.946565 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:54:43.946585 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:54:43.946603 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:54:43.946624 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:54:43.946643 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:54:43.946662 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:54:43.946682 | orchestrator | 2026-03-18 02:54:43.946703 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-18 02:54:43.946721 | orchestrator | Wednesday 18 March 2026 02:54:34 +0000 (0:00:02.405) 0:01:26.867 ******* 2026-03-18 02:54:43.946740 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:54:43.946753 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:54:43.946781 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:54:43.946793 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:54:43.946805 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:54:43.946817 | orchestrator | 
skipping: [testbed-node-3] 2026-03-18 02:54:43.946829 | orchestrator | 2026-03-18 02:54:43.946841 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-18 02:54:43.946853 | orchestrator | Wednesday 18 March 2026 02:54:36 +0000 (0:00:02.261) 0:01:29.128 ******* 2026-03-18 02:54:43.946866 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:54:43.946878 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:54:43.946888 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:54:43.946899 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:54:43.946909 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:54:43.946920 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:54:43.946930 | orchestrator | 2026-03-18 02:54:43.946941 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-18 02:54:43.946951 | orchestrator | Wednesday 18 March 2026 02:54:39 +0000 (0:00:02.434) 0:01:31.563 ******* 2026-03-18 02:54:43.946962 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:54:43.946972 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:54:43.946983 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:54:43.946993 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:54:43.947004 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:54:43.947014 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:54:43.947025 | orchestrator | 2026-03-18 02:54:43.947035 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-18 02:54:43.947046 | orchestrator | Wednesday 18 March 2026 02:54:41 +0000 (0:00:02.330) 0:01:33.893 ******* 2026-03-18 02:54:43.947084 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-18 02:54:43.947096 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:54:43.947151 | orchestrator | skipping: 
[testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-18 02:54:43.947169 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:54:43.947185 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-18 02:54:43.947218 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:54:43.947238 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-18 02:54:43.947252 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:54:43.947282 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-18 02:54:48.501628 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:54:48.501726 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-18 02:54:48.501742 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:54:48.501755 | orchestrator | 2026-03-18 02:54:48.501767 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-18 02:54:48.501779 | orchestrator | Wednesday 18 March 2026 02:54:43 +0000 (0:00:02.288) 0:01:36.182 ******* 2026-03-18 02:54:48.501794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:54:48.501836 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:54:48.501849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:54:48.501861 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:54:48.501873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:54:48.501885 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:54:48.501913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:48.501926 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:54:48.501956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:48.501968 | orchestrator | 
skipping: [testbed-node-3] 2026-03-18 02:54:48.501980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:54:48.501998 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:54:48.502008 | orchestrator | 2026-03-18 02:54:48.502089 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-18 02:54:48.502124 | orchestrator | Wednesday 18 March 2026 02:54:46 +0000 (0:00:02.249) 0:01:38.432 ******* 2026-03-18 02:54:48.502136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:54:48.502147 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:54:48.502163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:54:48.502175 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:54:48.502199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-18 02:55:16.221843 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:55:16.221983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:55:16.222209 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:55:16.222239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:55:16.222258 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:55:16.222274 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 02:55:16.222289 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:55:16.222306 | orchestrator | 2026-03-18 02:55:16.222325 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-18 02:55:16.222345 | orchestrator | Wednesday 18 March 2026 02:54:48 +0000 (0:00:02.317) 0:01:40.749 ******* 2026-03-18 02:55:16.222362 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:55:16.222379 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:55:16.222396 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:55:16.222413 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:55:16.222431 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:55:16.222449 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:55:16.222468 | orchestrator | 2026-03-18 02:55:16.222485 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-18 02:55:16.222502 | orchestrator | Wednesday 18 March 2026 02:54:50 +0000 (0:00:02.470) 0:01:43.220 ******* 2026-03-18 02:55:16.222518 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:55:16.222536 | orchestrator | skipping: [testbed-node-2] 2026-03-18 
02:55:16.222573 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:55:16.222592 | orchestrator | changed: [testbed-node-3] 2026-03-18 02:55:16.222610 | orchestrator | changed: [testbed-node-4] 2026-03-18 02:55:16.222627 | orchestrator | changed: [testbed-node-5] 2026-03-18 02:55:16.222645 | orchestrator | 2026-03-18 02:55:16.222662 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-18 02:55:16.222681 | orchestrator | Wednesday 18 March 2026 02:54:54 +0000 (0:00:04.013) 0:01:47.234 ******* 2026-03-18 02:55:16.222700 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:55:16.222718 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:55:16.222750 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:55:16.222768 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:55:16.222785 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:55:16.222801 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:55:16.222817 | orchestrator | 2026-03-18 02:55:16.222834 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-18 02:55:16.222852 | orchestrator | Wednesday 18 March 2026 02:54:57 +0000 (0:00:02.166) 0:01:49.400 ******* 2026-03-18 02:55:16.222869 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:55:16.222887 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:55:16.222904 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:55:16.222921 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:55:16.222938 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:55:16.222955 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:55:16.222972 | orchestrator | 2026-03-18 02:55:16.222990 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-18 02:55:16.223031 | orchestrator | Wednesday 18 March 2026 02:54:59 +0000 (0:00:02.403) 0:01:51.804 ******* 2026-03-18 
02:55:16.223050 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:55:16.223066 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:55:16.223084 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:55:16.223100 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:55:16.223145 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:55:16.223163 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:55:16.223179 | orchestrator | 2026-03-18 02:55:16.223195 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-18 02:55:16.223207 | orchestrator | Wednesday 18 March 2026 02:55:01 +0000 (0:00:02.255) 0:01:54.059 ******* 2026-03-18 02:55:16.223217 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:55:16.223227 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:55:16.223236 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:55:16.223245 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:55:16.223255 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:55:16.223264 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:55:16.223274 | orchestrator | 2026-03-18 02:55:16.223283 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-18 02:55:16.223293 | orchestrator | Wednesday 18 March 2026 02:55:04 +0000 (0:00:02.397) 0:01:56.456 ******* 2026-03-18 02:55:16.223302 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:55:16.223312 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:55:16.223321 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:55:16.223331 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:55:16.223341 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:55:16.223350 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:55:16.223359 | orchestrator | 2026-03-18 02:55:16.223367 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] 
************************** 2026-03-18 02:55:16.223375 | orchestrator | Wednesday 18 March 2026 02:55:06 +0000 (0:00:02.439) 0:01:58.896 ******* 2026-03-18 02:55:16.223383 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:55:16.223390 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:55:16.223398 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:55:16.223406 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:55:16.223413 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:55:16.223421 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:55:16.223429 | orchestrator | 2026-03-18 02:55:16.223437 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-18 02:55:16.223447 | orchestrator | Wednesday 18 March 2026 02:55:09 +0000 (0:00:02.387) 0:02:01.283 ******* 2026-03-18 02:55:16.223460 | orchestrator | skipping: [testbed-node-0] 2026-03-18 02:55:16.223473 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:55:16.223486 | orchestrator | skipping: [testbed-node-3] 2026-03-18 02:55:16.223498 | orchestrator | skipping: [testbed-node-2] 2026-03-18 02:55:16.223509 | orchestrator | skipping: [testbed-node-5] 2026-03-18 02:55:16.223534 | orchestrator | skipping: [testbed-node-4] 2026-03-18 02:55:16.223549 | orchestrator | 2026-03-18 02:55:16.223562 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-18 02:55:16.223576 | orchestrator | Wednesday 18 March 2026 02:55:11 +0000 (0:00:02.662) 0:02:03.946 ******* 2026-03-18 02:55:16.223590 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-18 02:55:16.223605 | orchestrator | skipping: [testbed-node-1] 2026-03-18 02:55:16.223619 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-18 02:55:16.223632 | orchestrator | skipping: [testbed-node-0] 
2026-03-18 02:55:16.223646 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-18 02:55:16.223660 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-18 02:55:16.223673 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:55:16.223687 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:55:16.223700 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-18 02:55:16.223714 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:55:16.223727 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-18 02:55:16.223741 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:55:16.223754 | orchestrator |
2026-03-18 02:55:16.223767 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-03-18 02:55:16.223781 | orchestrator | Wednesday 18 March 2026 02:55:13 +0000 (0:00:01.999) 0:02:05.946 *******
2026-03-18 02:55:16.223807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-18 02:55:16.223823 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:55:16.223848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-18 02:55:19.505001 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:55:19.505098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-18 02:55:19.505187 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:55:19.505199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-18 02:55:19.505207 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:55:19.505230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-18 02:55:19.505237 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:55:19.505244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-18 02:55:19.505251 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:55:19.505257 | orchestrator |
2026-03-18 02:55:19.505265 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-03-18 02:55:19.505272 | orchestrator | Wednesday 18 March 2026 02:55:16 +0000 (0:00:02.517) 0:02:08.463 *******
2026-03-18 02:55:19.505297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-18 02:55:19.505314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-18 02:55:19.505320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-18 02:55:19.505332 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-18 02:55:19.505339 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-18 02:55:19.505352 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-18 02:57:32.903313 | orchestrator |
2026-03-18 02:57:32.903456 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-18 02:57:32.903480 | orchestrator | Wednesday 18 March 2026 02:55:19 +0000 (0:00:03.290) 0:02:11.754 *******
2026-03-18 02:57:32.903500 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:57:32.903521 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:57:32.903540 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:57:32.903560 | orchestrator | skipping: [testbed-node-3]
2026-03-18 02:57:32.903579 | orchestrator | skipping: [testbed-node-4]
2026-03-18 02:57:32.903600 | orchestrator | skipping: [testbed-node-5]
2026-03-18 02:57:32.903620 | orchestrator |
2026-03-18 02:57:32.903637 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-03-18 02:57:32.903648 | orchestrator | Wednesday 18 March 2026 02:55:20 +0000 (0:00:00.845) 0:02:12.599 *******
2026-03-18 02:57:32.903660 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:57:32.903700 | orchestrator |
2026-03-18 02:57:32.903712 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-03-18 02:57:32.903725 | orchestrator | Wednesday 18 March 2026 02:55:22 +0000 (0:00:02.091) 0:02:14.691 *******
2026-03-18 02:57:32.903737 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:57:32.903750 | orchestrator |
2026-03-18 02:57:32.903762 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-03-18 02:57:32.903775 | orchestrator | Wednesday 18 March 2026 02:55:24 +0000 (0:00:02.202) 0:02:16.894 *******
2026-03-18 02:57:32.903787 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:57:32.903799 | orchestrator |
2026-03-18 02:57:32.903812 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-18 02:57:32.903824 | orchestrator | Wednesday 18 March 2026 02:56:06 +0000 (0:00:41.724) 0:02:58.618 *******
2026-03-18 02:57:32.903836 | orchestrator |
2026-03-18 02:57:32.903849 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-18 02:57:32.903863 | orchestrator | Wednesday 18 March 2026 02:56:06 +0000 (0:00:00.074) 0:02:58.693 *******
2026-03-18 02:57:32.903875 | orchestrator |
2026-03-18 02:57:32.903887 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-18 02:57:32.903899 | orchestrator | Wednesday 18 March 2026 02:56:06 +0000 (0:00:00.073) 0:02:58.767 *******
2026-03-18 02:57:32.903911 | orchestrator |
2026-03-18 02:57:32.903923 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-18 02:57:32.903936 | orchestrator | Wednesday 18 March 2026 02:56:06 +0000 (0:00:00.073) 0:02:58.841 *******
2026-03-18 02:57:32.903948 | orchestrator |
2026-03-18 02:57:32.903960 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-18 02:57:32.903973 | orchestrator | Wednesday 18 March 2026 02:56:06 +0000 (0:00:00.075) 0:02:58.916 *******
2026-03-18 02:57:32.903985 | orchestrator |
2026-03-18 02:57:32.903997 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-18 02:57:32.904027 | orchestrator | Wednesday 18 March 2026 02:56:06 +0000 (0:00:00.074) 0:02:58.991 *******
2026-03-18 02:57:32.904040 | orchestrator |
2026-03-18 02:57:32.904052 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-03-18 02:57:32.904064 | orchestrator | Wednesday 18 March 2026 02:56:06 +0000 (0:00:00.076) 0:02:59.068 *******
2026-03-18 02:57:32.904077 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:57:32.904089 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:57:32.904100 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:57:32.904135 | orchestrator |
2026-03-18 02:57:32.904146 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-03-18 02:57:32.904157 | orchestrator | Wednesday 18 March 2026 02:56:31 +0000 (0:00:24.809) 0:03:23.877 *******
2026-03-18 02:57:32.904168 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:57:32.904178 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:57:32.904189 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:57:32.904200 | orchestrator |
2026-03-18 02:57:32.904211 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 02:57:32.904222 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-18 02:57:32.904234 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-18 02:57:32.904245 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-18 02:57:32.904256 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-18 02:57:32.904268 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-18 02:57:32.904278 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-18 02:57:32.904289 | orchestrator |
2026-03-18 02:57:32.904300 | orchestrator |
2026-03-18 02:57:32.904311 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 02:57:32.904321 | orchestrator | Wednesday 18 March 2026 02:57:32 +0000 (0:01:00.717) 0:04:24.595 *******
2026-03-18 02:57:32.904332 | orchestrator | ===============================================================================
2026-03-18 02:57:32.904343 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 60.72s
2026-03-18 02:57:32.904353 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.72s
2026-03-18 02:57:32.904364 | orchestrator | neutron : Restart neutron-server container ----------------------------- 24.81s
2026-03-18 02:57:32.904395 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.24s
2026-03-18 02:57:32.904407 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.66s
2026-03-18 02:57:32.904424 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.47s
2026-03-18 02:57:32.904444 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.01s
2026-03-18 02:57:32.904462 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.81s
2026-03-18 02:57:32.904483 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.69s
2026-03-18 02:57:32.904503 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.48s
2026-03-18 02:57:32.904522 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.31s
2026-03-18 02:57:32.904533 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.29s
2026-03-18 02:57:32.904544 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.27s
2026-03-18 02:57:32.904554 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.20s
2026-03-18 02:57:32.904565 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.17s
2026-03-18 02:57:32.904576 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.89s
2026-03-18 02:57:32.904586 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.85s
2026-03-18 02:57:32.904597 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.79s
2026-03-18 02:57:32.904607 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 2.75s
2026-03-18 02:57:32.904627 | orchestrator | neutron : Copying over existing policy file ----------------------------- 2.68s
2026-03-18 02:57:36.895727 | orchestrator | 2026-03-18 02:57:36 | INFO  | Task e2f12121-8ece-4530-8878-34ff269fa26a (nova) was prepared for execution.
2026-03-18 02:57:36.895834 | orchestrator | 2026-03-18 02:57:36 | INFO  | It takes a moment until task e2f12121-8ece-4530-8878-34ff269fa26a (nova) has been started and output is visible here.
2026-03-18 02:59:33.704636 | orchestrator |
2026-03-18 02:59:33.704749 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-18 02:59:33.704765 | orchestrator |
2026-03-18 02:59:33.704775 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-18 02:59:33.704787 | orchestrator | Wednesday 18 March 2026 02:57:41 +0000 (0:00:00.319) 0:00:00.319 *******
2026-03-18 02:59:33.704815 | orchestrator | changed: [testbed-manager]
2026-03-18 02:59:33.704827 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:59:33.704837 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:59:33.704847 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:59:33.704857 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:59:33.704866 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:59:33.704877 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:59:33.704886 | orchestrator |
2026-03-18 02:59:33.704896 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-18 02:59:33.704906 | orchestrator | Wednesday 18 March 2026 02:57:42 +0000 (0:00:00.921) 0:00:01.240 *******
2026-03-18 02:59:33.704916 | orchestrator | changed: [testbed-manager]
2026-03-18 02:59:33.704926 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:59:33.704936 | orchestrator | changed: [testbed-node-1]
2026-03-18 02:59:33.704946 | orchestrator | changed: [testbed-node-2]
2026-03-18 02:59:33.704956 | orchestrator | changed: [testbed-node-3]
2026-03-18 02:59:33.704966 | orchestrator | changed: [testbed-node-4]
2026-03-18 02:59:33.704975 | orchestrator | changed: [testbed-node-5]
2026-03-18 02:59:33.704983 | orchestrator |
2026-03-18 02:59:33.704999 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-18 02:59:33.705009 | orchestrator | Wednesday 18 March 2026 02:57:43 +0000 (0:00:00.880) 0:00:02.120 *******
2026-03-18 02:59:33.705018 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-18 02:59:33.705029 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-18 02:59:33.705038 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-18 02:59:33.705048 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-18 02:59:33.705058 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-18 02:59:33.705067 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-18 02:59:33.705077 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-18 02:59:33.705086 | orchestrator |
2026-03-18 02:59:33.705096 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-18 02:59:33.705105 | orchestrator |
2026-03-18 02:59:33.705115 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-18 02:59:33.705125 | orchestrator | Wednesday 18 March 2026 02:57:44 +0000 (0:00:00.781) 0:00:02.902 *******
2026-03-18 02:59:33.705136 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:59:33.705155 | orchestrator |
2026-03-18 02:59:33.705161 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-18 02:59:33.705167 | orchestrator | Wednesday 18 March 2026 02:57:45 +0000 (0:00:00.797) 0:00:03.699 *******
2026-03-18 02:59:33.705175 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-18 02:59:33.705182 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-18 02:59:33.705189 | orchestrator |
2026-03-18 02:59:33.705195 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-18 02:59:33.705223 | orchestrator | Wednesday 18 March 2026 02:57:49 +0000 (0:00:03.990) 0:00:07.689 *******
2026-03-18 02:59:33.705231 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-18 02:59:33.705238 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-18 02:59:33.705244 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:59:33.705253 | orchestrator |
2026-03-18 02:59:33.705263 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-18 02:59:33.705271 | orchestrator | Wednesday 18 March 2026 02:57:53 +0000 (0:00:03.983) 0:00:11.673 *******
2026-03-18 02:59:33.705286 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:59:33.705308 | orchestrator |
2026-03-18 02:59:33.705341 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-18 02:59:33.705351 | orchestrator | Wednesday 18 March 2026 02:57:53 +0000 (0:00:00.682) 0:00:12.355 *******
2026-03-18 02:59:33.705361 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:59:33.705369 | orchestrator |
2026-03-18 02:59:33.705378 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-18 02:59:33.705388 | orchestrator | Wednesday 18 March 2026 02:57:55 +0000 (0:00:01.267) 0:00:13.623 *******
2026-03-18 02:59:33.705397 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:59:33.705407 | orchestrator |
2026-03-18 02:59:33.705418 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-18 02:59:33.705427 | orchestrator | Wednesday 18 March 2026 02:57:57 +0000 (0:00:02.741) 0:00:16.364 *******
2026-03-18 02:59:33.705437 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:59:33.705447 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:59:33.705458 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:59:33.705467 | orchestrator |
2026-03-18 02:59:33.705477 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-18 02:59:33.705486 | orchestrator | Wednesday 18 March 2026 02:57:58 +0000 (0:00:00.348) 0:00:16.713 *******
2026-03-18 02:59:33.705496 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:59:33.705507 | orchestrator |
2026-03-18 02:59:33.705517 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-18 02:59:33.705528 | orchestrator | Wednesday 18 March 2026 02:58:29 +0000 (0:00:31.477) 0:00:48.191 *******
2026-03-18 02:59:33.705538 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:59:33.705547 | orchestrator |
2026-03-18 02:59:33.705556 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-18 02:59:33.705566 | orchestrator | Wednesday 18 March 2026 02:58:43 +0000 (0:00:13.964) 0:01:02.155 *******
2026-03-18 02:59:33.705575 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:59:33.705585 | orchestrator |
2026-03-18 02:59:33.705594 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-18 02:59:33.705605 | orchestrator | Wednesday 18 March 2026 02:58:55 +0000 (0:00:11.798) 0:01:13.954 *******
2026-03-18 02:59:33.705634 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:59:33.705644 | orchestrator |
2026-03-18 02:59:33.705654 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-18 02:59:33.705664 | orchestrator | Wednesday 18 March 2026 02:58:56 +0000 (0:00:00.741) 0:01:14.696 *******
2026-03-18 02:59:33.705673 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:59:33.705683 | orchestrator |
2026-03-18 02:59:33.705699 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-18 02:59:33.705709 | orchestrator | Wednesday 18 March 2026 02:58:56 +0000 (0:00:00.479) 0:01:15.175 *******
2026-03-18 02:59:33.705720 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:59:33.705730 | orchestrator |
2026-03-18 02:59:33.705740 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-18 02:59:33.705749 | orchestrator | Wednesday 18 March 2026 02:58:57 +0000 (0:00:00.756) 0:01:15.932 *******
2026-03-18 02:59:33.705759 | orchestrator | ok: [testbed-node-0]
2026-03-18 02:59:33.705768 | orchestrator |
2026-03-18 02:59:33.705778 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-18 02:59:33.705796 | orchestrator | Wednesday 18 March 2026 02:59:14 +0000 (0:00:17.308) 0:01:33.240 *******
2026-03-18 02:59:33.705805 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:59:33.705815 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:59:33.705825 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:59:33.705835 | orchestrator |
2026-03-18 02:59:33.705845 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-18 02:59:33.705854 | orchestrator |
2026-03-18 02:59:33.705864 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-18 02:59:33.705873 | orchestrator | Wednesday 18 March 2026 02:59:14 +0000 (0:00:00.362) 0:01:33.603 *******
2026-03-18 02:59:33.705882 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 02:59:33.705892 | orchestrator |
2026-03-18 02:59:33.705902 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-03-18 02:59:33.705912 | orchestrator | Wednesday 18 March 2026 02:59:15 +0000 (0:00:00.845) 0:01:34.448 *******
2026-03-18 02:59:33.705922 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:59:33.705931 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:59:33.705941 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:59:33.705951 | orchestrator |
2026-03-18 02:59:33.705960 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-18 02:59:33.705969 | orchestrator | Wednesday 18 March 2026 02:59:17 +0000 (0:00:02.014) 0:01:36.463 *******
2026-03-18 02:59:33.705978 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:59:33.705987 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:59:33.705995 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:59:33.706004 | orchestrator |
2026-03-18 02:59:33.706064 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-18 02:59:33.706075 | orchestrator | Wednesday 18 March 2026 02:59:19 +0000 (0:00:02.100) 0:01:38.563 *******
2026-03-18 02:59:33.706086 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:59:33.706096 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:59:33.706106 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:59:33.706115 | orchestrator |
2026-03-18 02:59:33.706125 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-18 02:59:33.706136 | orchestrator | Wednesday 18 March 2026 02:59:20 +0000 (0:00:00.594) 0:01:39.157 *******
2026-03-18 02:59:33.706145 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-18 02:59:33.706154 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:59:33.706164 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-18 02:59:33.706175 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:59:33.706185 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-18 02:59:33.706195 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-18 02:59:33.706206 | orchestrator |
2026-03-18 02:59:33.706216 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-18 02:59:33.706226 | orchestrator | Wednesday 18 March 2026 02:59:28 +0000 (0:00:07.460) 0:01:46.618 *******
2026-03-18 02:59:33.706237 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:59:33.706246 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:59:33.706257 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:59:33.706268 | orchestrator |
2026-03-18 02:59:33.706278 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-18 02:59:33.706286 | orchestrator | Wednesday 18 March 2026 02:59:28 +0000 (0:00:00.341) 0:01:46.959 *******
2026-03-18 02:59:33.706292 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-18 02:59:33.706298 | orchestrator | skipping: [testbed-node-0]
2026-03-18 02:59:33.706303 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-18 02:59:33.706309 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:59:33.706363 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-18 02:59:33.706370 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:59:33.706382 | orchestrator |
2026-03-18 02:59:33.706388 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-18 02:59:33.706394 | orchestrator | Wednesday 18 March 2026 02:59:29 +0000 (0:00:01.214) 0:01:48.174 *******
2026-03-18 02:59:33.706400 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:59:33.706405 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:59:33.706411 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:59:33.706416 | orchestrator |
2026-03-18 02:59:33.706422 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-18 02:59:33.706428 | orchestrator | Wednesday 18 March 2026 02:59:30 +0000 (0:00:00.477) 0:01:48.651 *******
2026-03-18 02:59:33.706434 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:59:33.706439 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:59:33.706445 | orchestrator | changed: [testbed-node-0]
2026-03-18 02:59:33.706451 | orchestrator |
2026-03-18 02:59:33.706456 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-18 02:59:33.706462 | orchestrator | Wednesday 18 March 2026 02:59:31 +0000 (0:00:01.015) 0:01:49.666 *******
2026-03-18 02:59:33.706468 | orchestrator | skipping: [testbed-node-1]
2026-03-18 02:59:33.706473 | orchestrator | skipping: [testbed-node-2]
2026-03-18 02:59:33.706486 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:00:51.277555 | orchestrator |
2026-03-18 03:00:51.277699 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-18 03:00:51.277727 | orchestrator | Wednesday 18 March 2026 02:59:33 +0000 (0:00:02.636) 0:01:52.303 *******
2026-03-18 03:00:51.277746 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:00:51.277765 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:00:51.277784 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:00:51.277802 | orchestrator |
2026-03-18 03:00:51.277821 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-18 03:00:51.277839 | orchestrator | Wednesday 18 March 2026 02:59:54 +0000 (0:00:20.825) 0:02:13.128 *******
2026-03-18 03:00:51.277857 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:00:51.277874 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:00:51.277892 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:00:51.277909 | orchestrator |
2026-03-18 03:00:51.277927 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-18 03:00:51.277945 | orchestrator | Wednesday 18 March 2026 03:00:06 +0000 (0:00:12.066) 0:02:25.195 *******
2026-03-18 03:00:51.277963 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:00:51.277981 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:00:51.277997 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:00:51.278014 | orchestrator |
2026-03-18 03:00:51.278121 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-18 03:00:51.278141 | orchestrator | Wednesday 18 March 2026 03:00:07 +0000 (0:00:01.142) 0:02:26.337 *******
2026-03-18 03:00:51.278195 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:00:51.278215 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:00:51.278236 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:00:51.278255 | orchestrator |
2026-03-18 03:00:51.278275 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-18 03:00:51.278293 | orchestrator | Wednesday 18 March 2026 03:00:19 +0000 (0:00:12.187) 0:02:38.525 *******
2026-03-18 03:00:51.278313 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:00:51.278329 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:00:51.278347 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:00:51.278365 | orchestrator |
2026-03-18 03:00:51.278382 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-18 03:00:51.278399 | orchestrator | Wednesday 18 March 2026 03:00:21 +0000 (0:00:01.266) 0:02:39.792 *******
2026-03-18 03:00:51.278418 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:00:51.278436 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:00:51.278454 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:00:51.278471 | orchestrator |
2026-03-18 03:00:51.278490 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-18 03:00:51.278539 | orchestrator |
2026-03-18 03:00:51.278557 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-18 03:00:51.278574 | orchestrator | Wednesday 18 March 2026 03:00:21 +0000 (0:00:00.353) 0:02:40.145 *******
2026-03-18 03:00:51.278592 |
orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:00:51.278612 | orchestrator | 2026-03-18 03:00:51.278629 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-03-18 03:00:51.278647 | orchestrator | Wednesday 18 March 2026 03:00:22 +0000 (0:00:00.832) 0:02:40.977 ******* 2026-03-18 03:00:51.278663 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-18 03:00:51.278680 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-18 03:00:51.278698 | orchestrator | 2026-03-18 03:00:51.278715 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-03-18 03:00:51.278732 | orchestrator | Wednesday 18 March 2026 03:00:25 +0000 (0:00:03.182) 0:02:44.160 ******* 2026-03-18 03:00:51.278750 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-18 03:00:51.278826 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-18 03:00:51.278848 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-18 03:00:51.278865 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-18 03:00:51.278883 | orchestrator | 2026-03-18 03:00:51.278900 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-18 03:00:51.278919 | orchestrator | Wednesday 18 March 2026 03:00:31 +0000 (0:00:06.187) 0:02:50.347 ******* 2026-03-18 03:00:51.278936 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-18 03:00:51.278954 | orchestrator | 2026-03-18 03:00:51.278971 | orchestrator | TASK [service-ks-register : nova | Creating 
users] ***************************** 2026-03-18 03:00:51.278989 | orchestrator | Wednesday 18 March 2026 03:00:34 +0000 (0:00:03.135) 0:02:53.483 ******* 2026-03-18 03:00:51.279006 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-18 03:00:51.279023 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-03-18 03:00:51.279040 | orchestrator | 2026-03-18 03:00:51.279058 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-18 03:00:51.279076 | orchestrator | Wednesday 18 March 2026 03:00:38 +0000 (0:00:03.855) 0:02:57.338 ******* 2026-03-18 03:00:51.279094 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-18 03:00:51.279112 | orchestrator | 2026-03-18 03:00:51.279129 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-18 03:00:51.279147 | orchestrator | Wednesday 18 March 2026 03:00:42 +0000 (0:00:03.326) 0:03:00.665 ******* 2026-03-18 03:00:51.279201 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-18 03:00:51.279220 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-18 03:00:51.279236 | orchestrator | 2026-03-18 03:00:51.279254 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-18 03:00:51.279303 | orchestrator | Wednesday 18 March 2026 03:00:49 +0000 (0:00:07.852) 0:03:08.517 ******* 2026-03-18 03:00:51.279342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-18 03:00:51.279393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-18 03:00:51.279418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-18 03:00:51.279461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-03-18 03:00:56.065313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:00:56.065428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:00:56.065445 | orchestrator | 2026-03-18 03:00:56.065460 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-18 03:00:56.065472 | orchestrator | Wednesday 18 March 2026 03:00:51 +0000 (0:00:01.363) 0:03:09.881 ******* 2026-03-18 03:00:56.065483 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:00:56.065495 | orchestrator | 2026-03-18 03:00:56.065506 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-18 03:00:56.065517 | orchestrator | Wednesday 18 March 2026 03:00:51 +0000 (0:00:00.169) 0:03:10.051 ******* 2026-03-18 03:00:56.065528 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:00:56.065539 | 
orchestrator | skipping: [testbed-node-1] 2026-03-18 03:00:56.065550 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:00:56.065560 | orchestrator | 2026-03-18 03:00:56.065571 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-18 03:00:56.065582 | orchestrator | Wednesday 18 March 2026 03:00:51 +0000 (0:00:00.320) 0:03:10.371 ******* 2026-03-18 03:00:56.065592 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-18 03:00:56.065603 | orchestrator | 2026-03-18 03:00:56.065614 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-18 03:00:56.065624 | orchestrator | Wednesday 18 March 2026 03:00:52 +0000 (0:00:00.763) 0:03:11.135 ******* 2026-03-18 03:00:56.065635 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:00:56.065646 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:00:56.065657 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:00:56.065667 | orchestrator | 2026-03-18 03:00:56.065678 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-18 03:00:56.065689 | orchestrator | Wednesday 18 March 2026 03:00:53 +0000 (0:00:00.585) 0:03:11.721 ******* 2026-03-18 03:00:56.065700 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:00:56.065711 | orchestrator | 2026-03-18 03:00:56.065722 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-18 03:00:56.065733 | orchestrator | Wednesday 18 March 2026 03:00:53 +0000 (0:00:00.640) 0:03:12.362 ******* 2026-03-18 03:00:56.065747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-18 03:00:56.065801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-18 03:00:56.065817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-18 03:00:56.065830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:00:56.065842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:00:56.065860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:00:56.065873 | orchestrator | 2026-03-18 03:00:56.065898 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-18 03:00:57.659714 | orchestrator | Wednesday 18 March 2026 03:00:56 +0000 (0:00:02.304) 0:03:14.667 ******* 2026-03-18 03:00:57.659825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-18 03:00:57.659853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 03:00:57.659868 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:00:57.659884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-18 03:00:57.659923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 03:00:57.659938 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:00:57.659987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-18 03:00:57.660003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 03:00:57.660015 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:00:57.660028 | orchestrator | 2026-03-18 03:00:57.660042 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-18 03:00:57.660055 | orchestrator | Wednesday 18 March 2026 03:00:56 +0000 (0:00:00.734) 0:03:15.402 
******* 2026-03-18 03:00:57.660069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-18 03:00:57.660091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 03:00:57.660106 | orchestrator | skipping: [testbed-node-0] 
2026-03-18 03:00:57.660137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-18 03:01:00.055266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:01:00.055345 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:01:00.055357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-18 03:01:00.055383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:01:00.055390 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:01:00.055396 | orchestrator |
2026-03-18 03:01:00.055402 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2026-03-18 03:01:00.055410 | orchestrator | Wednesday 18 March 2026 03:00:57 +0000 (0:00:00.861) 0:03:16.263 *******
2026-03-18 03:01:00.055429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-18 03:01:00.055450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-18 03:01:00.055458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-18 03:01:00.055468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:01:00.055479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:01:00.055489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:01:06.850095 | orchestrator |
2026-03-18 03:01:06.850244 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2026-03-18 03:01:06.850258 | orchestrator | Wednesday 18 March 2026 03:01:00 +0000 (0:00:02.395) 0:03:18.659 *******
2026-03-18 03:01:06.850272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-18 03:01:06.850304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-18 03:01:06.850329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-18 03:01:06.850356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:01:06.850367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:01:06.850375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:01:06.850390 | orchestrator |
2026-03-18 03:01:06.850399 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2026-03-18 03:01:06.850407 | orchestrator | Wednesday 18 March 2026 03:01:06 +0000 (0:00:06.162) 0:03:24.821 *******
2026-03-18 03:01:06.850416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-18 03:01:06.850429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:01:06.850438 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:01:06.850456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-18 03:01:11.254498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:01:11.254679 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:01:11.254716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-18 03:01:11.254760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:01:11.254781 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:01:11.254800 | orchestrator |
2026-03-18 03:01:11.254821 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-03-18 03:01:11.254841 | orchestrator | Wednesday 18 March 2026 03:01:06 +0000 (0:00:00.630) 0:03:25.452 *******
2026-03-18 03:01:11.254860 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:01:11.254878 | orchestrator | changed: [testbed-node-1]
2026-03-18 03:01:11.254896 | orchestrator | changed: [testbed-node-2]
2026-03-18 03:01:11.254913 | orchestrator |
2026-03-18 03:01:11.254930 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-03-18 03:01:11.254950 | orchestrator | Wednesday 18 March 2026 03:01:08 +0000 (0:00:01.591) 0:03:27.044 *******
2026-03-18 03:01:11.254970 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:01:11.254990 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:01:11.255009 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:01:11.255028 | orchestrator |
2026-03-18 03:01:11.255047 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-03-18 03:01:11.255067 | orchestrator | Wednesday 18 March 2026 03:01:08 +0000 (0:00:00.348) 0:03:27.392 *******
2026-03-18 03:01:11.255208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-18 03:01:11.255254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-18 03:01:11.255279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-18 03:01:11.255294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:01:11.255309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:01:11.255340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:01:53.100889 | orchestrator |
2026-03-18 03:01:53.101012 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-03-18 03:01:53.101081 | orchestrator | Wednesday 18 March 2026 03:01:10 +0000 (0:00:01.976) 0:03:29.369 *******
2026-03-18 03:01:53.101093 | orchestrator |
2026-03-18 03:01:53.101105 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-03-18 03:01:53.101116 | orchestrator | Wednesday 18 March 2026 03:01:10 +0000 (0:00:00.151) 0:03:29.520 *******
2026-03-18 03:01:53.101127 | orchestrator |
2026-03-18 03:01:53.101138 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-03-18 03:01:53.101149 | orchestrator | Wednesday 18 March 2026 03:01:11 +0000 (0:00:00.179) 0:03:29.700 *******
2026-03-18 03:01:53.101160 | orchestrator |
2026-03-18 03:01:53.101171 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-03-18 03:01:53.101181 | orchestrator | Wednesday 18 March 2026 03:01:11 +0000 (0:00:00.153) 0:03:29.853 *******
2026-03-18 03:01:53.101192 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:01:53.101204 | orchestrator | changed: [testbed-node-1]
2026-03-18 03:01:53.101215 | orchestrator | changed: [testbed-node-2]
2026-03-18 03:01:53.101226 | orchestrator |
2026-03-18 03:01:53.101236 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-03-18 03:01:53.101247 | orchestrator | Wednesday 18 March 2026 03:01:30 +0000 (0:00:19.626) 0:03:49.480 *******
2026-03-18 03:01:53.101258 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:01:53.101269 | orchestrator | changed: [testbed-node-2]
2026-03-18 03:01:53.101279 | orchestrator | changed: [testbed-node-1]
2026-03-18 03:01:53.101290 | orchestrator |
2026-03-18 03:01:53.101301 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-03-18 03:01:53.101311 | orchestrator |
2026-03-18 03:01:53.101322 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-18 03:01:53.101333 | orchestrator | Wednesday 18 March 2026 03:01:40 +0000 (0:00:10.117) 0:03:59.598 *******
2026-03-18 03:01:53.101344 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 03:01:53.101356 | orchestrator |
2026-03-18 03:01:53.101367 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-18 03:01:53.101378 | orchestrator | Wednesday 18 March 2026 03:01:42 +0000 (0:00:01.333) 0:04:00.932 *******
2026-03-18 03:01:53.101388 | orchestrator | skipping: [testbed-node-3]
2026-03-18 03:01:53.101399 | orchestrator | skipping: [testbed-node-4]
2026-03-18 03:01:53.101409 | orchestrator | skipping: [testbed-node-5]
2026-03-18 03:01:53.101420 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:01:53.101431 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:01:53.101463 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:01:53.101483 | orchestrator |
2026-03-18 03:01:53.101528 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-03-18 03:01:53.101545 | orchestrator | Wednesday 18 March 2026 03:01:43 +0000 (0:00:00.818) 0:04:01.750 *******
2026-03-18 03:01:53.101561 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:01:53.101578 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:01:53.101595 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:01:53.101613 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 03:01:53.101631 | orchestrator |
2026-03-18 03:01:53.101648 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-18 03:01:53.101667 | orchestrator | Wednesday 18 March 2026 03:01:44 +0000 (0:00:00.916) 0:04:02.667 *******
2026-03-18 03:01:53.101684 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-03-18 03:01:53.101704 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-03-18 03:01:53.101722 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-03-18 03:01:53.101740 | orchestrator |
2026-03-18 03:01:53.101759 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-18 03:01:53.101777 | orchestrator | Wednesday 18 March 2026 03:01:44 +0000 (0:00:00.923) 0:04:03.591 *******
2026-03-18 03:01:53.101792 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-03-18 03:01:53.101804 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-03-18 03:01:53.101814 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-03-18 03:01:53.101825 | orchestrator |
2026-03-18 03:01:53.101836 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-18 03:01:53.101846 | orchestrator | Wednesday 18 March 2026 03:01:46 +0000 (0:00:01.154) 0:04:04.745 *******
2026-03-18 03:01:53.101857 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-03-18 03:01:53.101867 | orchestrator | skipping: [testbed-node-3]
2026-03-18 03:01:53.101878 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-03-18 03:01:53.101889 | orchestrator | skipping: [testbed-node-4]
2026-03-18 03:01:53.101899 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-03-18 03:01:53.101910 | orchestrator | skipping: [testbed-node-5]
2026-03-18 03:01:53.101920 | orchestrator |
2026-03-18 03:01:53.101931 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-03-18 03:01:53.101941 | orchestrator | Wednesday 18 March 2026 03:01:46 +0000 (0:00:00.592) 0:04:05.337 *******
2026-03-18 03:01:53.101952 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-18 03:01:53.101963 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-18 03:01:53.101973 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-18 03:01:53.101984 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-18 03:01:53.101994 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:01:53.102005 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-18 03:01:53.102114 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-18 03:01:53.102150 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-18 03:01:53.102162 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-18 03:01:53.102173 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-18 03:01:53.102183 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:01:53.102194 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-18 03:01:53.102205 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-18 03:01:53.102215 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:01:53.102226 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-18 03:01:53.102237 | orchestrator |
2026-03-18 03:01:53.102261 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-03-18 03:01:53.102272 | orchestrator | Wednesday 18 March 2026 03:01:47 +0000 (0:00:01.220) 0:04:06.558 *******
2026-03-18 03:01:53.102282 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:01:53.102293 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:01:53.102304 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:01:53.102315 | orchestrator | changed: [testbed-node-3]
2026-03-18 03:01:53.102326 | orchestrator | changed: [testbed-node-4]
2026-03-18 03:01:53.102337 | orchestrator | changed: [testbed-node-5]
2026-03-18 03:01:53.102347 | orchestrator |
2026-03-18 03:01:53.102358 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-03-18 03:01:53.102370 | orchestrator | Wednesday 18 March 2026 03:01:49 +0000 (0:00:01.236) 0:04:07.795 *******
2026-03-18 03:01:53.102380 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:01:53.102391 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:01:53.102401 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:01:53.102412 | orchestrator | changed: [testbed-node-3]
2026-03-18 03:01:53.102423 | orchestrator | changed: [testbed-node-5]
2026-03-18 03:01:53.102433 | orchestrator | changed: [testbed-node-4]
2026-03-18 03:01:53.102444 | orchestrator |
2026-03-18 03:01:53.102454 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-18 03:01:53.102465 | orchestrator | Wednesday 18 March 2026 03:01:50 +0000 (0:00:01.795) 0:04:09.591 *******
2026-03-18 03:01:53.102487 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-18 03:01:53.102507 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-18 03:01:53.102524 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-18 03:01:53.102556 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-18 03:01:58.527299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-18 03:01:58.527445 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-18 03:01:58.527473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-18 03:01:58.527492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-18 03:01:58.527508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-18 03:01:58.527551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:01:58.527589 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-18 03:01:58.527613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-18 03:01:58.527629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-18 03:01:58.527645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:01:58.527660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:01:58.527688 | orchestrator | 2026-03-18 03:01:58.527705 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-18 03:01:58.527721 | 
orchestrator | Wednesday 18 March 2026 03:01:53 +0000 (0:00:02.590) 0:04:12.181 ******* 2026-03-18 03:01:58.527735 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:01:58.527750 | orchestrator | 2026-03-18 03:01:58.527764 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-18 03:01:58.527778 | orchestrator | Wednesday 18 March 2026 03:01:55 +0000 (0:00:01.542) 0:04:13.724 ******* 2026-03-18 03:01:58.527807 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-18 03:01:58.977870 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-18 03:01:58.977973 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-18 03:01:58.977989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': 
'30'}}}) 2026-03-18 03:01:58.978139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-18 03:01:58.978156 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-18 03:01:58.978188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-18 03:01:58.978201 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-18 03:01:58.978221 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-18 03:01:58.978232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:01:58.978245 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-18 03:01:58.978265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:01:58.978276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:01:58.978296 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-18 03:02:00.556100 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-18 03:02:00.556200 | orchestrator | 2026-03-18 03:02:00.556219 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-18 03:02:00.556235 | orchestrator | Wednesday 18 March 2026 03:01:58 +0000 (0:00:03.862) 0:04:17.586 ******* 2026-03-18 03:02:00.556252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-18 03:02:00.556303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-18 03:02:00.556319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-18 03:02:00.556332 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:02:00.556369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-18 03:02:00.556393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-18 03:02:00.556409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-18 03:02:00.556434 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:02:00.556449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-18 03:02:00.556465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-18 03:02:00.556480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-18 03:02:00.556496 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:02:00.556529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-18 03:02:02.362689 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:02:02.362815 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:02:02.362832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-18 03:02:02.362840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:02:02.362848 | orchestrator | skipping: [testbed-node-0] 2026-03-18 
03:02:02.362855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-18 03:02:02.362862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:02:02.362868 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:02:02.362875 | orchestrator | 2026-03-18 03:02:02.362883 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-18 03:02:02.362891 | orchestrator | Wednesday 18 March 2026 03:02:00 +0000 (0:00:01.765) 0:04:19.352 ******* 2026-03-18 03:02:02.362929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-18 03:02:02.362938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-18 03:02:02.362952 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-18 03:02:02.362961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-18 03:02:02.362967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-18 03:02:02.362973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-18 03:02:02.362977 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:02:02.362981 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:02:02.363036 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-18 03:02:15.539829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-18 03:02:15.540038 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-18 03:02:15.540072 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:02:15.540095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-18 03:02:15.540114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:02:15.540132 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:02:15.540149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-18 03:02:15.540215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:02:15.540235 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:02:15.540276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-18 03:02:15.540295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:02:15.540312 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:02:15.540330 | orchestrator | 2026-03-18 03:02:15.540350 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-18 03:02:15.540370 | orchestrator | Wednesday 18 March 2026 03:02:03 +0000 (0:00:02.697) 0:04:22.049 ******* 2026-03-18 03:02:15.540388 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:02:15.540406 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:02:15.540424 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:02:15.540442 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 03:02:15.540460 | orchestrator | 2026-03-18 03:02:15.540478 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-18 
03:02:15.540496 | orchestrator | Wednesday 18 March 2026 03:02:04 +0000 (0:00:01.276) 0:04:23.326 ******* 2026-03-18 03:02:15.540514 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-18 03:02:15.540532 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-18 03:02:15.540549 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-18 03:02:15.540567 | orchestrator | 2026-03-18 03:02:15.540581 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-18 03:02:15.540596 | orchestrator | Wednesday 18 March 2026 03:02:05 +0000 (0:00:01.272) 0:04:24.598 ******* 2026-03-18 03:02:15.540611 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-18 03:02:15.540626 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-18 03:02:15.540641 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-18 03:02:15.540655 | orchestrator | 2026-03-18 03:02:15.540670 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-18 03:02:15.540684 | orchestrator | Wednesday 18 March 2026 03:02:07 +0000 (0:00:01.037) 0:04:25.635 ******* 2026-03-18 03:02:15.540698 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:02:15.540715 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:02:15.540730 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:02:15.540744 | orchestrator | 2026-03-18 03:02:15.540757 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-18 03:02:15.540783 | orchestrator | Wednesday 18 March 2026 03:02:07 +0000 (0:00:00.635) 0:04:26.271 ******* 2026-03-18 03:02:15.540797 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:02:15.540811 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:02:15.540824 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:02:15.540838 | orchestrator | 2026-03-18 03:02:15.540851 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 
2026-03-18 03:02:15.540863 | orchestrator | Wednesday 18 March 2026 03:02:08 +0000 (0:00:00.678) 0:04:26.949 ******* 2026-03-18 03:02:15.540876 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-18 03:02:15.540890 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-18 03:02:15.540902 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-18 03:02:15.540916 | orchestrator | 2026-03-18 03:02:15.540930 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-18 03:02:15.540943 | orchestrator | Wednesday 18 March 2026 03:02:09 +0000 (0:00:01.412) 0:04:28.361 ******* 2026-03-18 03:02:15.540957 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-18 03:02:15.540996 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-18 03:02:15.541010 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-18 03:02:15.541024 | orchestrator | 2026-03-18 03:02:15.541046 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-18 03:02:15.541060 | orchestrator | Wednesday 18 March 2026 03:02:10 +0000 (0:00:01.237) 0:04:29.598 ******* 2026-03-18 03:02:15.541074 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-18 03:02:15.541087 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-18 03:02:15.541101 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-18 03:02:15.541115 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-18 03:02:15.541129 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-18 03:02:15.541142 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-18 03:02:15.541156 | orchestrator | 2026-03-18 03:02:15.541169 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-18 
03:02:15.541182 | orchestrator | Wednesday 18 March 2026 03:02:15 +0000 (0:00:04.222) 0:04:33.821 ******* 2026-03-18 03:02:15.541197 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:02:15.541211 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:02:15.541223 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:02:15.541236 | orchestrator | 2026-03-18 03:02:15.541260 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-18 03:02:30.425876 | orchestrator | Wednesday 18 March 2026 03:02:15 +0000 (0:00:00.323) 0:04:34.144 ******* 2026-03-18 03:02:30.426077 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:02:30.426097 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:02:30.426103 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:02:30.426109 | orchestrator | 2026-03-18 03:02:30.426115 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-18 03:02:30.426122 | orchestrator | Wednesday 18 March 2026 03:02:16 +0000 (0:00:00.559) 0:04:34.703 ******* 2026-03-18 03:02:30.426128 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:02:30.426134 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:02:30.426139 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:02:30.426145 | orchestrator | 2026-03-18 03:02:30.426151 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-18 03:02:30.426156 | orchestrator | Wednesday 18 March 2026 03:02:17 +0000 (0:00:01.286) 0:04:35.990 ******* 2026-03-18 03:02:30.426162 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-18 03:02:30.426169 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-18 03:02:30.426193 | orchestrator | 
changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-18 03:02:30.426199 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-18 03:02:30.426205 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-18 03:02:30.426210 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-18 03:02:30.426215 | orchestrator | 2026-03-18 03:02:30.426221 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-18 03:02:30.426226 | orchestrator | Wednesday 18 March 2026 03:02:20 +0000 (0:00:03.477) 0:04:39.467 ******* 2026-03-18 03:02:30.426232 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-18 03:02:30.426239 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-18 03:02:30.426248 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-18 03:02:30.426257 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-18 03:02:30.426265 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:02:30.426274 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-18 03:02:30.426336 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:02:30.426349 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-18 03:02:30.426359 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:02:30.426368 | orchestrator | 2026-03-18 03:02:30.426378 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-18 03:02:30.426387 | orchestrator | Wednesday 18 March 2026 03:02:24 +0000 (0:00:03.548) 0:04:43.016 ******* 2026-03-18 03:02:30.426394 | 
orchestrator | skipping: [testbed-node-3] 2026-03-18 03:02:30.426400 | orchestrator | 2026-03-18 03:02:30.426406 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-18 03:02:30.426422 | orchestrator | Wednesday 18 March 2026 03:02:24 +0000 (0:00:00.139) 0:04:43.156 ******* 2026-03-18 03:02:30.426429 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:02:30.426436 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:02:30.426443 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:02:30.426449 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:02:30.426455 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:02:30.426462 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:02:30.426468 | orchestrator | 2026-03-18 03:02:30.426474 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-18 03:02:30.426480 | orchestrator | Wednesday 18 March 2026 03:02:25 +0000 (0:00:00.878) 0:04:44.035 ******* 2026-03-18 03:02:30.426486 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-18 03:02:30.426493 | orchestrator | 2026-03-18 03:02:30.426499 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-18 03:02:30.426505 | orchestrator | Wednesday 18 March 2026 03:02:26 +0000 (0:00:00.733) 0:04:44.769 ******* 2026-03-18 03:02:30.426511 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:02:30.426517 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:02:30.426523 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:02:30.426529 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:02:30.426535 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:02:30.426543 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:02:30.426553 | orchestrator | 2026-03-18 03:02:30.426577 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 
2026-03-18 03:02:30.426587 | orchestrator | Wednesday 18 March 2026 03:02:27 +0000 (0:00:00.908) 0:04:45.677 ******* 2026-03-18 03:02:30.426622 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-18 03:02:30.426649 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-18 03:02:30.426660 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-18 03:02:30.426672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-18 03:02:30.426683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-18 03:02:30.426698 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-18 03:02:30.426722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-18 03:02:35.597256 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-18 03:02:35.597389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-18 03:02:35.597407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:02:35.597421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:02:35.597450 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-18 03:02:35.597482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:02:35.597514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-18 03:02:35.597556 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-18 03:02:35.597568 | orchestrator | 2026-03-18 03:02:35.597580 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-18 03:02:35.597593 | orchestrator | Wednesday 18 March 2026 03:02:30 +0000 (0:00:03.884) 0:04:49.562 ******* 2026-03-18 03:02:35.597604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-18 03:02:35.597616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-18 03:02:35.597643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-18 03:02:35.597665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-18 03:02:43.528712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-18 03:02:43.528835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-18 03:02:43.528851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-18 03:02:43.528878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-18 03:02:43.528956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-18 03:02:43.528987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-18 03:02:43.528997 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-18 03:02:43.529005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-18 03:02:43.529014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:02:43.529034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:02:43.529043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:02:43.529052 | orchestrator | 2026-03-18 03:02:43.529062 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-18 03:02:43.529072 | orchestrator | Wednesday 18 March 2026 03:02:37 +0000 (0:00:06.923) 0:04:56.485 ******* 2026-03-18 03:02:43.529080 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:02:43.529089 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:02:43.529097 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:02:43.529105 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:02:43.529113 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:02:43.529121 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:02:43.529129 | orchestrator | 2026-03-18 03:02:43.529138 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-18 03:02:43.529146 | orchestrator | Wednesday 18 March 2026 03:02:39 +0000 (0:00:01.519) 0:04:58.004 ******* 2026-03-18 03:02:43.529154 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-18 03:02:43.529168 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-18 03:03:01.861232 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-18 03:03:01.861368 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-18 03:03:01.861388 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-18 03:03:01.861443 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-18 03:03:01.861451 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-18 03:03:01.861458 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:03:01.861464 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-18 03:03:01.861469 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:03:01.861474 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-18 03:03:01.861478 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:03:01.861483 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-18 03:03:01.861487 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-18 03:03:01.861491 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-18 03:03:01.861495 | orchestrator | 2026-03-18 03:03:01.861500 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-18 03:03:01.861504 | orchestrator | Wednesday 18 March 2026 03:02:43 +0000 (0:00:04.125) 0:05:02.129 ******* 2026-03-18 03:03:01.861530 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:03:01.861535 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:03:01.861538 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:03:01.861542 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:03:01.861546 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:03:01.861550 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:03:01.861553 | orchestrator | 2026-03-18 03:03:01.861557 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-18 03:03:01.861561 | orchestrator | Wednesday 18 March 2026 03:02:44 +0000 (0:00:00.686) 0:05:02.815 ******* 2026-03-18 03:03:01.861565 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-18 03:03:01.861570 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-18 03:03:01.861573 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-18 03:03:01.861577 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-18 03:03:01.861581 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-18 03:03:01.861585 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-18 03:03:01.861589 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-18 03:03:01.861593 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-18 03:03:01.861611 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-18 03:03:01.861615 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-18 03:03:01.861619 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:03:01.861623 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-18 03:03:01.861627 | orchestrator | 
skipping: [testbed-node-0] 2026-03-18 03:03:01.861630 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-18 03:03:01.861634 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:03:01.861638 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-18 03:03:01.861642 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-18 03:03:01.861646 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-18 03:03:01.861650 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-18 03:03:01.861653 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-18 03:03:01.861657 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-18 03:03:01.861661 | orchestrator | 2026-03-18 03:03:01.861665 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-18 03:03:01.861669 | orchestrator | Wednesday 18 March 2026 03:02:49 +0000 (0:00:05.721) 0:05:08.536 ******* 2026-03-18 03:03:01.861686 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-18 03:03:01.861690 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-18 03:03:01.861698 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-18 03:03:01.861702 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-18 03:03:01.861706 
| orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-18 03:03:01.861710 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-18 03:03:01.861714 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-18 03:03:01.861718 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-18 03:03:01.861722 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-18 03:03:01.861725 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-18 03:03:01.861729 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-18 03:03:01.861733 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-18 03:03:01.861737 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-18 03:03:01.861740 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:03:01.861744 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-18 03:03:01.861748 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-18 03:03:01.861752 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:03:01.861756 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-18 03:03:01.861759 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-18 03:03:01.861764 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-18 03:03:01.861768 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:03:01.861771 | orchestrator | changed: [testbed-node-4] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-18 03:03:01.861775 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-18 03:03:01.861779 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-18 03:03:01.861783 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-18 03:03:01.861787 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-18 03:03:01.861791 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-18 03:03:01.861794 | orchestrator | 2026-03-18 03:03:01.861798 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-18 03:03:01.861802 | orchestrator | Wednesday 18 March 2026 03:02:56 +0000 (0:00:06.967) 0:05:15.504 ******* 2026-03-18 03:03:01.861806 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:03:01.861811 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:03:01.861818 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:03:01.861827 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:03:01.861834 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:03:01.861840 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:03:01.861846 | orchestrator | 2026-03-18 03:03:01.861853 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-18 03:03:01.861859 | orchestrator | Wednesday 18 March 2026 03:02:57 +0000 (0:00:00.921) 0:05:16.425 ******* 2026-03-18 03:03:01.861865 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:03:01.861893 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:03:01.861899 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:03:01.861905 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:03:01.861910 | orchestrator | 
skipping: [testbed-node-1] 2026-03-18 03:03:01.861922 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:03:01.861928 | orchestrator | 2026-03-18 03:03:01.861935 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-18 03:03:01.861940 | orchestrator | Wednesday 18 March 2026 03:02:58 +0000 (0:00:00.673) 0:05:17.099 ******* 2026-03-18 03:03:01.861946 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:03:01.861953 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:03:01.861958 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:03:01.861964 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:03:01.861970 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:03:01.861976 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:03:01.861982 | orchestrator | 2026-03-18 03:03:01.861989 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-18 03:03:01.861995 | orchestrator | Wednesday 18 March 2026 03:03:00 +0000 (0:00:02.114) 0:05:19.213 ******* 2026-03-18 03:03:01.862060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}})  2026-03-18 03:03:02.145077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-18 03:03:02.145235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-18 03:03:02.145256 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:03:02.145295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-18 03:03:02.145384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-18 03:03:02.145398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-18 03:03:02.145410 | orchestrator | skipping: 
[testbed-node-4] 2026-03-18 03:03:02.145450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-18 03:03:02.145463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-18 03:03:02.145475 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-18 03:03:02.145494 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:03:02.145513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-18 03:03:02.145526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:03:02.145537 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:03:02.145548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-18 03:03:02.145570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:03:05.575373 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:03:05.575518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-18 03:03:05.575550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:03:05.575572 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:03:05.575626 | orchestrator | 2026-03-18 03:03:05.575649 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-18 03:03:05.575670 | orchestrator | Wednesday 18 March 2026 03:03:02 +0000 (0:00:01.537) 0:05:20.751 ******* 2026-03-18 03:03:05.575688 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-18 03:03:05.575700 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-18 03:03:05.575711 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:03:05.575722 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-18 03:03:05.575733 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-18 03:03:05.575744 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:03:05.575772 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-18 03:03:05.575784 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-18 03:03:05.575795 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:03:05.575805 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-18 03:03:05.575816 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-18 03:03:05.575827 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:03:05.575837 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-03-18 03:03:05.575848 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-18 03:03:05.575862 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:03:05.575906 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-18 03:03:05.575919 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-18 03:03:05.575932 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:03:05.575945 | orchestrator | 2026-03-18 03:03:05.575958 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-18 03:03:05.575971 | orchestrator | Wednesday 18 March 2026 03:03:03 +0000 (0:00:00.990) 0:05:21.742 ******* 2026-03-18 03:03:05.575985 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-18 03:03:05.576023 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-18 03:03:05.576039 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-18 03:03:05.576063 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-18 03:03:05.576085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-18 03:03:05.576099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-18 03:03:05.576114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-18 03:03:05.576136 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-18 03:05:13.593227 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-18 03:05:13.593381 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-18 03:05:13.593413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:05:13.593426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:05:13.593437 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-18 03:05:13.593447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:05:13.593473 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-18 03:05:13.593494 | orchestrator | 2026-03-18 03:05:13.593506 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-18 03:05:13.593517 | orchestrator | Wednesday 18 March 2026 03:03:05 +0000 (0:00:02.666) 
0:05:24.408 ******* 2026-03-18 03:05:13.593527 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:05:13.593539 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:05:13.593548 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:05:13.593558 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:05:13.593568 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:05:13.593577 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:05:13.593587 | orchestrator | 2026-03-18 03:05:13.593597 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-18 03:05:13.593606 | orchestrator | Wednesday 18 March 2026 03:03:06 +0000 (0:00:00.904) 0:05:25.312 ******* 2026-03-18 03:05:13.593616 | orchestrator | 2026-03-18 03:05:13.593626 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-18 03:05:13.593635 | orchestrator | Wednesday 18 March 2026 03:03:06 +0000 (0:00:00.157) 0:05:25.469 ******* 2026-03-18 03:05:13.593713 | orchestrator | 2026-03-18 03:05:13.593723 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-18 03:05:13.593733 | orchestrator | Wednesday 18 March 2026 03:03:07 +0000 (0:00:00.149) 0:05:25.619 ******* 2026-03-18 03:05:13.593743 | orchestrator | 2026-03-18 03:05:13.593753 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-18 03:05:13.593763 | orchestrator | Wednesday 18 March 2026 03:03:07 +0000 (0:00:00.172) 0:05:25.791 ******* 2026-03-18 03:05:13.593772 | orchestrator | 2026-03-18 03:05:13.593787 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-18 03:05:13.593797 | orchestrator | Wednesday 18 March 2026 03:03:07 +0000 (0:00:00.156) 0:05:25.948 ******* 2026-03-18 03:05:13.593806 | orchestrator | 2026-03-18 03:05:13.593816 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-03-18 03:05:13.593825 | orchestrator | Wednesday 18 March 2026 03:03:07 +0000 (0:00:00.328) 0:05:26.276 ******* 2026-03-18 03:05:13.593835 | orchestrator | 2026-03-18 03:05:13.593844 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-18 03:05:13.593854 | orchestrator | Wednesday 18 March 2026 03:03:07 +0000 (0:00:00.160) 0:05:26.437 ******* 2026-03-18 03:05:13.593864 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:05:13.593873 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:05:13.593883 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:05:13.593892 | orchestrator | 2026-03-18 03:05:13.593902 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-18 03:05:13.593911 | orchestrator | Wednesday 18 March 2026 03:03:20 +0000 (0:00:12.598) 0:05:39.036 ******* 2026-03-18 03:05:13.593921 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:05:13.593930 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:05:13.593940 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:05:13.593949 | orchestrator | 2026-03-18 03:05:13.593959 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-18 03:05:13.593969 | orchestrator | Wednesday 18 March 2026 03:03:34 +0000 (0:00:14.088) 0:05:53.125 ******* 2026-03-18 03:05:13.593978 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:05:13.593988 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:05:13.594004 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:05:13.594014 | orchestrator | 2026-03-18 03:05:13.594088 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-18 03:05:13.594098 | orchestrator | Wednesday 18 March 2026 03:04:00 +0000 (0:00:25.659) 0:06:18.784 ******* 2026-03-18 03:05:13.594107 | orchestrator | changed: 
[testbed-node-5] 2026-03-18 03:05:13.594117 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:05:13.594126 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:05:13.594136 | orchestrator | 2026-03-18 03:05:13.594145 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-18 03:05:13.594155 | orchestrator | Wednesday 18 March 2026 03:04:45 +0000 (0:00:45.204) 0:07:03.989 ******* 2026-03-18 03:05:13.594164 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:05:13.594174 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:05:13.594183 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:05:13.594193 | orchestrator | 2026-03-18 03:05:13.594202 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-18 03:05:13.594212 | orchestrator | Wednesday 18 March 2026 03:04:46 +0000 (0:00:00.810) 0:07:04.800 ******* 2026-03-18 03:05:13.594221 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:05:13.594231 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:05:13.594240 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:05:13.594249 | orchestrator | 2026-03-18 03:05:13.594259 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-18 03:05:13.594268 | orchestrator | Wednesday 18 March 2026 03:04:47 +0000 (0:00:00.827) 0:07:05.628 ******* 2026-03-18 03:05:13.594278 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:05:13.594287 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:05:13.594296 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:05:13.594306 | orchestrator | 2026-03-18 03:05:13.594316 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-18 03:05:13.594335 | orchestrator | Wednesday 18 March 2026 03:05:13 +0000 (0:00:26.559) 0:07:32.187 ******* 2026-03-18 03:06:25.025015 | orchestrator | skipping: 
[testbed-node-3] 2026-03-18 03:06:25.025166 | orchestrator | 2026-03-18 03:06:25.025196 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-18 03:06:25.025218 | orchestrator | Wednesday 18 March 2026 03:05:13 +0000 (0:00:00.141) 0:07:32.329 ******* 2026-03-18 03:06:25.025238 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:06:25.025257 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:06:25.025275 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:25.025295 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:06:25.025313 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:06:25.025334 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-03-18 03:06:25.025348 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-18 03:06:25.025360 | orchestrator | 2026-03-18 03:06:25.025371 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-18 03:06:25.025382 | orchestrator | Wednesday 18 March 2026 03:05:36 +0000 (0:00:22.486) 0:07:54.815 ******* 2026-03-18 03:06:25.025393 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:06:25.025404 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:06:25.025414 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:25.025425 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:06:25.025436 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:06:25.025446 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:06:25.025457 | orchestrator | 2026-03-18 03:06:25.025468 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-18 03:06:25.025478 | orchestrator | Wednesday 18 March 2026 03:05:46 +0000 (0:00:10.305) 0:08:05.121 ******* 2026-03-18 03:06:25.025489 | orchestrator | skipping: [testbed-node-3] 
2026-03-18 03:06:25.025500 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:06:25.025510 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:06:25.025580 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:25.025595 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:06:25.025609 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-03-18 03:06:25.025623 | orchestrator | 2026-03-18 03:06:25.025637 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-18 03:06:25.025650 | orchestrator | Wednesday 18 March 2026 03:05:50 +0000 (0:00:03.896) 0:08:09.017 ******* 2026-03-18 03:06:25.025663 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-18 03:06:25.025676 | orchestrator | 2026-03-18 03:06:25.025710 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-18 03:06:25.025730 | orchestrator | Wednesday 18 March 2026 03:06:03 +0000 (0:00:12.804) 0:08:21.821 ******* 2026-03-18 03:06:25.025758 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-18 03:06:25.025782 | orchestrator | 2026-03-18 03:06:25.025798 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-18 03:06:25.025815 | orchestrator | Wednesday 18 March 2026 03:06:04 +0000 (0:00:01.660) 0:08:23.482 ******* 2026-03-18 03:06:25.025832 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:06:25.025849 | orchestrator | 2026-03-18 03:06:25.025866 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-18 03:06:25.025884 | orchestrator | Wednesday 18 March 2026 03:06:06 +0000 (0:00:01.803) 0:08:25.286 ******* 2026-03-18 03:06:25.025903 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-18 03:06:25.025923 | orchestrator | 2026-03-18 03:06:25.025942 | 
orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-18 03:06:25.025960 | orchestrator | Wednesday 18 March 2026 03:06:17 +0000 (0:00:11.198) 0:08:36.484 ******* 2026-03-18 03:06:25.025977 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:06:25.025989 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:06:25.026000 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:06:25.026011 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:06:25.026089 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:06:25.026101 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:06:25.026112 | orchestrator | 2026-03-18 03:06:25.026123 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-18 03:06:25.026134 | orchestrator | 2026-03-18 03:06:25.026144 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-18 03:06:25.026155 | orchestrator | Wednesday 18 March 2026 03:06:19 +0000 (0:00:01.864) 0:08:38.348 ******* 2026-03-18 03:06:25.026167 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:06:25.026177 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:06:25.026188 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:06:25.026199 | orchestrator | 2026-03-18 03:06:25.026210 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-18 03:06:25.026220 | orchestrator | 2026-03-18 03:06:25.026231 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-18 03:06:25.026242 | orchestrator | Wednesday 18 March 2026 03:06:20 +0000 (0:00:00.930) 0:08:39.279 ******* 2026-03-18 03:06:25.026252 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:25.026263 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:06:25.026274 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:06:25.026285 | orchestrator | 2026-03-18 
03:06:25.026295 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-18 03:06:25.026306 | orchestrator | 2026-03-18 03:06:25.026317 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-18 03:06:25.026327 | orchestrator | Wednesday 18 March 2026 03:06:21 +0000 (0:00:00.792) 0:08:40.071 ******* 2026-03-18 03:06:25.026338 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-18 03:06:25.026349 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-18 03:06:25.026360 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-18 03:06:25.026385 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-18 03:06:25.026396 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-18 03:06:25.026407 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-18 03:06:25.026418 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:06:25.026452 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-18 03:06:25.026464 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-18 03:06:25.026475 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-18 03:06:25.026486 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-18 03:06:25.026497 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-18 03:06:25.026508 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-18 03:06:25.026519 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:06:25.026570 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-18 03:06:25.026589 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-18 03:06:25.026600 | orchestrator | skipping: [testbed-node-5] => 
(item=nova-compute-ironic)
2026-03-18 03:06:25.026612 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-03-18 03:06:25.026630 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-03-18 03:06:25.026647 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-03-18 03:06:25.026662 | orchestrator | skipping: [testbed-node-5]
2026-03-18 03:06:25.026679 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-03-18 03:06:25.026695 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-18 03:06:25.026713 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-18 03:06:25.026732 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-03-18 03:06:25.026751 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-03-18 03:06:25.026773 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-03-18 03:06:25.026800 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:06:25.026820 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-03-18 03:06:25.026837 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-18 03:06:25.026857 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-18 03:06:25.026876 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-03-18 03:06:25.026895 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-03-18 03:06:25.026909 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-03-18 03:06:25.026929 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:06:25.026941 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-03-18 03:06:25.026951 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-18 03:06:25.026962 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-18 03:06:25.026973 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-03-18 03:06:25.026984 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-03-18 03:06:25.026995 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-03-18 03:06:25.027006 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:06:25.027017 | orchestrator |
2026-03-18 03:06:25.027028 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-03-18 03:06:25.027039 | orchestrator |
2026-03-18 03:06:25.027049 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-03-18 03:06:25.027060 | orchestrator | Wednesday 18 March 2026 03:06:22 +0000 (0:00:01.477) 0:08:41.549 *******
2026-03-18 03:06:25.027071 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-03-18 03:06:25.027082 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-03-18 03:06:25.027104 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:06:25.027114 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-03-18 03:06:25.027125 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-03-18 03:06:25.027136 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:06:25.027147 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-03-18 03:06:25.027157 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-03-18 03:06:25.027168 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:06:25.027179 | orchestrator |
2026-03-18 03:06:25.027189 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-03-18 03:06:25.027200 | orchestrator |
2026-03-18 03:06:25.027211 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-03-18 03:06:25.027222 | orchestrator | Wednesday 18 March 2026 03:06:23 +0000 (0:00:00.615) 0:08:42.164 *******
2026-03-18 03:06:25.027232 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:06:25.027243 | orchestrator |
2026-03-18 03:06:25.027254 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-18 03:06:25.027265 | orchestrator |
2026-03-18 03:06:25.027275 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-18 03:06:25.027286 | orchestrator | Wednesday 18 March 2026 03:06:24 +0000 (0:00:00.976) 0:08:43.141 *******
2026-03-18 03:06:25.027297 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:06:25.027308 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:06:25.027318 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:06:25.027329 | orchestrator |
2026-03-18 03:06:25.027340 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 03:06:25.027351 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 03:06:25.027364 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-03-18 03:06:25.027376 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-18 03:06:25.027398 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-18 03:06:25.497290 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-18 03:06:25.497375 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-03-18 03:06:25.497385 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-18 03:06:25.497391 | orchestrator |
2026-03-18 03:06:25.497398 | orchestrator |
2026-03-18 03:06:25.497405 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 03:06:25.497413 | orchestrator | Wednesday 18 March 2026 03:06:25 +0000 (0:00:00.483) 0:08:43.624 *******
2026-03-18 03:06:25.497420 | orchestrator | ===============================================================================
2026-03-18 03:06:25.497426 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 45.21s
2026-03-18 03:06:25.497433 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.48s
2026-03-18 03:06:25.497439 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 26.56s
2026-03-18 03:06:25.497446 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 25.66s
2026-03-18 03:06:25.497452 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.49s
2026-03-18 03:06:25.497457 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.83s
2026-03-18 03:06:25.497481 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.63s
2026-03-18 03:06:25.497486 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.31s
2026-03-18 03:06:25.497490 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.09s
2026-03-18 03:06:25.497494 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.96s
2026-03-18 03:06:25.497509 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.80s
2026-03-18 03:06:25.497513 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.60s
2026-03-18 03:06:25.497517 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.19s
2026-03-18 03:06:25.497520 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.07s
2026-03-18 03:06:25.497524 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.80s
2026-03-18 03:06:25.497567 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.20s
2026-03-18 03:06:25.497573 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.31s
2026-03-18 03:06:25.497577 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.12s
2026-03-18 03:06:25.497581 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.85s
2026-03-18 03:06:25.497653 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.46s
2026-03-18 03:06:28.055319 | orchestrator | 2026-03-18 03:06:28 | INFO  | Task 675cf0cd-6d93-4be0-b5dd-75306182523c (horizon) was prepared for execution.
2026-03-18 03:06:28.055400 | orchestrator | 2026-03-18 03:06:28 | INFO  | It takes a moment until task 675cf0cd-6d93-4be0-b5dd-75306182523c (horizon) has been started and output is visible here.
2026-03-18 03:06:35.825360 | orchestrator |
2026-03-18 03:06:35.825460 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-18 03:06:35.825469 | orchestrator |
2026-03-18 03:06:35.825476 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-18 03:06:35.825483 | orchestrator | Wednesday 18 March 2026 03:06:32 +0000 (0:00:00.273) 0:00:00.273 *******
2026-03-18 03:06:35.825489 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:06:35.825498 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:06:35.825504 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:06:35.825510 | orchestrator |
2026-03-18 03:06:35.825574 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-18 03:06:35.825579 | orchestrator | Wednesday 18 March 2026 03:06:32 +0000 (0:00:00.366) 0:00:00.640 *******
2026-03-18 03:06:35.825583 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-18 03:06:35.825589 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-18 03:06:35.825593 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-18 03:06:35.825597 | orchestrator |
2026-03-18 03:06:35.825601 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-18 03:06:35.825604 | orchestrator |
2026-03-18 03:06:35.825609 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-18 03:06:35.825613 | orchestrator | Wednesday 18 March 2026 03:06:33 +0000 (0:00:00.458) 0:00:01.099 *******
2026-03-18 03:06:35.825617 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 03:06:35.825622 | orchestrator |
2026-03-18 03:06:35.825626 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-03-18 03:06:35.825630 | orchestrator | Wednesday 18 March 2026 03:06:33 +0000 (0:00:00.539) 0:00:01.638 *******
2026-03-18 03:06:35.825652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-18 03:06:35.825687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-18 03:06:35.825700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-18 03:06:35.825705 | orchestrator |
2026-03-18 03:06:35.825709 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-03-18 03:06:35.825713 | orchestrator | Wednesday 18 March 2026 03:06:35 +0000 (0:00:01.200) 0:00:02.839 *******
2026-03-18 03:06:35.825717 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:06:35.825721 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:06:35.825724 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:06:35.825728 | orchestrator |
2026-03-18 03:06:35.825732 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-18 03:06:35.825736 | orchestrator | Wednesday 18 March 2026 03:06:35 +0000 (0:00:00.526) 0:00:03.365 *******
2026-03-18 03:06:35.825742 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-18 03:06:42.374571 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-18 03:06:42.374682 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-03-18 03:06:42.374699 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-03-18 03:06:42.374711 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-18 03:06:42.374722 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-18 03:06:42.374733 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-18 03:06:42.374744 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-18 03:06:42.374755 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-18 03:06:42.374766 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-18 03:06:42.374776 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-18 03:06:42.374810 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-18 03:06:42.374821 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-18 03:06:42.374832 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-18 03:06:42.374843 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-18 03:06:42.374854 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-18 03:06:42.374864 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-18 03:06:42.374875 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-18 03:06:42.374885 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-18 03:06:42.374896 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-18 03:06:42.374906 | orchestrator | skipping: [testbed-node-2] => 
(item={'name': 'mistral', 'enabled': False})  2026-03-18 03:06:42.374917 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-18 03:06:42.374928 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-18 03:06:42.374939 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-18 03:06:42.374951 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-18 03:06:42.374963 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-18 03:06:42.374974 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-18 03:06:42.374985 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-18 03:06:42.374995 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-18 03:06:42.375021 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-18 03:06:42.375032 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-18 03:06:42.375042 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-18 
03:06:42.375053 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-18 03:06:42.375065 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-18 03:06:42.375077 | orchestrator | 2026-03-18 03:06:42.375089 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-18 03:06:42.375100 | orchestrator | Wednesday 18 March 2026 03:06:36 +0000 (0:00:00.796) 0:00:04.162 ******* 2026-03-18 03:06:42.375112 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:06:42.375123 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:06:42.375134 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:06:42.375144 | orchestrator | 2026-03-18 03:06:42.375155 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-18 03:06:42.375174 | orchestrator | Wednesday 18 March 2026 03:06:36 +0000 (0:00:00.322) 0:00:04.484 ******* 2026-03-18 03:06:42.375185 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:42.375197 | orchestrator | 2026-03-18 03:06:42.375225 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-18 03:06:42.375238 | orchestrator | Wednesday 18 March 2026 03:06:37 +0000 (0:00:00.348) 0:00:04.833 ******* 2026-03-18 03:06:42.375248 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:42.375259 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:06:42.375270 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:06:42.375281 | orchestrator | 2026-03-18 03:06:42.375291 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-18 03:06:42.375302 | orchestrator | Wednesday 18 March 2026 03:06:37 +0000 (0:00:00.342) 0:00:05.176 
******* 2026-03-18 03:06:42.375313 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:06:42.375323 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:06:42.375334 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:06:42.375344 | orchestrator | 2026-03-18 03:06:42.375355 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-18 03:06:42.375366 | orchestrator | Wednesday 18 March 2026 03:06:37 +0000 (0:00:00.360) 0:00:05.536 ******* 2026-03-18 03:06:42.375376 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:42.375387 | orchestrator | 2026-03-18 03:06:42.375398 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-18 03:06:42.375408 | orchestrator | Wednesday 18 March 2026 03:06:37 +0000 (0:00:00.126) 0:00:05.663 ******* 2026-03-18 03:06:42.375419 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:42.375430 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:06:42.375441 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:06:42.375452 | orchestrator | 2026-03-18 03:06:42.375463 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-18 03:06:42.375473 | orchestrator | Wednesday 18 March 2026 03:06:38 +0000 (0:00:00.310) 0:00:05.973 ******* 2026-03-18 03:06:42.375484 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:06:42.375495 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:06:42.375529 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:06:42.375543 | orchestrator | 2026-03-18 03:06:42.375553 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-18 03:06:42.375564 | orchestrator | Wednesday 18 March 2026 03:06:38 +0000 (0:00:00.589) 0:00:06.563 ******* 2026-03-18 03:06:42.375575 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:42.375585 | orchestrator | 2026-03-18 03:06:42.375596 | orchestrator | TASK [horizon 
: Update custom policy file name] ******************************** 2026-03-18 03:06:42.375607 | orchestrator | Wednesday 18 March 2026 03:06:39 +0000 (0:00:00.160) 0:00:06.723 ******* 2026-03-18 03:06:42.375618 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:42.375628 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:06:42.375639 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:06:42.375650 | orchestrator | 2026-03-18 03:06:42.375661 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-18 03:06:42.375672 | orchestrator | Wednesday 18 March 2026 03:06:39 +0000 (0:00:00.326) 0:00:07.050 ******* 2026-03-18 03:06:42.375682 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:06:42.375693 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:06:42.375704 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:06:42.375714 | orchestrator | 2026-03-18 03:06:42.375725 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-18 03:06:42.375736 | orchestrator | Wednesday 18 March 2026 03:06:39 +0000 (0:00:00.371) 0:00:07.422 ******* 2026-03-18 03:06:42.375749 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:42.375768 | orchestrator | 2026-03-18 03:06:42.375786 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-18 03:06:42.375802 | orchestrator | Wednesday 18 March 2026 03:06:39 +0000 (0:00:00.144) 0:00:07.566 ******* 2026-03-18 03:06:42.375817 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:42.375848 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:06:42.375874 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:06:42.375892 | orchestrator | 2026-03-18 03:06:42.375909 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-18 03:06:42.375924 | orchestrator | Wednesday 18 March 2026 03:06:40 +0000 (0:00:00.579) 
0:00:08.146 ******* 2026-03-18 03:06:42.375942 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:06:42.375958 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:06:42.375976 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:06:42.375994 | orchestrator | 2026-03-18 03:06:42.376011 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-18 03:06:42.376029 | orchestrator | Wednesday 18 March 2026 03:06:40 +0000 (0:00:00.344) 0:00:08.490 ******* 2026-03-18 03:06:42.376048 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:42.376065 | orchestrator | 2026-03-18 03:06:42.376095 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-18 03:06:42.376115 | orchestrator | Wednesday 18 March 2026 03:06:40 +0000 (0:00:00.144) 0:00:08.635 ******* 2026-03-18 03:06:42.376130 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:42.376141 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:06:42.376151 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:06:42.376162 | orchestrator | 2026-03-18 03:06:42.376173 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-18 03:06:42.376184 | orchestrator | Wednesday 18 March 2026 03:06:41 +0000 (0:00:00.334) 0:00:08.970 ******* 2026-03-18 03:06:42.376201 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:06:42.376219 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:06:42.376237 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:06:42.376254 | orchestrator | 2026-03-18 03:06:42.376272 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-18 03:06:42.376288 | orchestrator | Wednesday 18 March 2026 03:06:41 +0000 (0:00:00.380) 0:00:09.350 ******* 2026-03-18 03:06:42.376305 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:42.376323 | orchestrator | 2026-03-18 03:06:42.376342 | orchestrator | 
TASK [horizon : Update custom policy file name] ******************************** 2026-03-18 03:06:42.376361 | orchestrator | Wednesday 18 March 2026 03:06:42 +0000 (0:00:00.339) 0:00:09.690 ******* 2026-03-18 03:06:42.376381 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:42.376399 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:06:42.376419 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:06:42.376431 | orchestrator | 2026-03-18 03:06:42.376442 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-18 03:06:42.376465 | orchestrator | Wednesday 18 March 2026 03:06:42 +0000 (0:00:00.342) 0:00:10.033 ******* 2026-03-18 03:06:56.894077 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:06:56.894179 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:06:56.894194 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:06:56.894204 | orchestrator | 2026-03-18 03:06:56.894215 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-18 03:06:56.894227 | orchestrator | Wednesday 18 March 2026 03:06:42 +0000 (0:00:00.349) 0:00:10.382 ******* 2026-03-18 03:06:56.894236 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:56.894247 | orchestrator | 2026-03-18 03:06:56.894257 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-18 03:06:56.894267 | orchestrator | Wednesday 18 March 2026 03:06:42 +0000 (0:00:00.159) 0:00:10.541 ******* 2026-03-18 03:06:56.894277 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:56.894286 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:06:56.894296 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:06:56.894306 | orchestrator | 2026-03-18 03:06:56.894315 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-18 03:06:56.894325 | orchestrator | Wednesday 18 March 2026 03:06:43 +0000 
(0:00:00.308) 0:00:10.850 ******* 2026-03-18 03:06:56.894335 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:06:56.894345 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:06:56.894377 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:06:56.894387 | orchestrator | 2026-03-18 03:06:56.894397 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-18 03:06:56.894407 | orchestrator | Wednesday 18 March 2026 03:06:43 +0000 (0:00:00.580) 0:00:11.430 ******* 2026-03-18 03:06:56.894417 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:56.894426 | orchestrator | 2026-03-18 03:06:56.894436 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-18 03:06:56.894445 | orchestrator | Wednesday 18 March 2026 03:06:43 +0000 (0:00:00.152) 0:00:11.583 ******* 2026-03-18 03:06:56.894455 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:56.894465 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:06:56.894474 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:06:56.894509 | orchestrator | 2026-03-18 03:06:56.894521 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-18 03:06:56.894531 | orchestrator | Wednesday 18 March 2026 03:06:44 +0000 (0:00:00.361) 0:00:11.945 ******* 2026-03-18 03:06:56.894540 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:06:56.894550 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:06:56.894559 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:06:56.894571 | orchestrator | 2026-03-18 03:06:56.894582 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-18 03:06:56.894593 | orchestrator | Wednesday 18 March 2026 03:06:44 +0000 (0:00:00.365) 0:00:12.310 ******* 2026-03-18 03:06:56.894605 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:56.894616 | orchestrator | 2026-03-18 03:06:56.894627 | 
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-18 03:06:56.894638 | orchestrator | Wednesday 18 March 2026 03:06:44 +0000 (0:00:00.133) 0:00:12.443 ******* 2026-03-18 03:06:56.894649 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:56.894660 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:06:56.894672 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:06:56.894683 | orchestrator | 2026-03-18 03:06:56.894694 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-18 03:06:56.894705 | orchestrator | Wednesday 18 March 2026 03:06:45 +0000 (0:00:00.546) 0:00:12.990 ******* 2026-03-18 03:06:56.894716 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:06:56.894727 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:06:56.894739 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:06:56.894749 | orchestrator | 2026-03-18 03:06:56.894761 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-18 03:06:56.894772 | orchestrator | Wednesday 18 March 2026 03:06:45 +0000 (0:00:00.345) 0:00:13.336 ******* 2026-03-18 03:06:56.894781 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:56.894790 | orchestrator | 2026-03-18 03:06:56.894800 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-18 03:06:56.894809 | orchestrator | Wednesday 18 March 2026 03:06:45 +0000 (0:00:00.145) 0:00:13.481 ******* 2026-03-18 03:06:56.894819 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:56.894828 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:06:56.894838 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:06:56.894847 | orchestrator | 2026-03-18 03:06:56.894857 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-18 03:06:56.894880 | orchestrator | Wednesday 18 March 2026 
03:06:46 +0000 (0:00:00.323) 0:00:13.805 *******
2026-03-18 03:06:56.894890 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:06:56.894899 | orchestrator | changed: [testbed-node-2]
2026-03-18 03:06:56.894908 | orchestrator | changed: [testbed-node-1]
2026-03-18 03:06:56.894918 | orchestrator |
2026-03-18 03:06:56.894927 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-03-18 03:06:56.894937 | orchestrator | Wednesday 18 March 2026 03:06:47 +0000 (0:00:01.849) 0:00:15.654 *******
2026-03-18 03:06:56.894947 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-18 03:06:56.894957 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-18 03:06:56.894975 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-18 03:06:56.894984 | orchestrator |
2026-03-18 03:06:56.894994 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-03-18 03:06:56.895003 | orchestrator | Wednesday 18 March 2026 03:06:49 +0000 (0:00:01.961) 0:00:17.616 *******
2026-03-18 03:06:56.895013 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-18 03:06:56.895023 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-18 03:06:56.895033 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-18 03:06:56.895042 | orchestrator |
2026-03-18 03:06:56.895052 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-03-18 03:06:56.895080 | orchestrator | Wednesday 18 March 2026 03:06:51 +0000 (0:00:01.831) 0:00:19.448 *******
2026-03-18 03:06:56.895091 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-18 03:06:56.895100 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-18 03:06:56.895110 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-18 03:06:56.895120 | orchestrator |
2026-03-18 03:06:56.895129 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-03-18 03:06:56.895139 | orchestrator | Wednesday 18 March 2026 03:06:53 +0000 (0:00:01.586) 0:00:21.035 *******
2026-03-18 03:06:56.895148 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:06:56.895158 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:06:56.895168 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:06:56.895181 | orchestrator |
2026-03-18 03:06:56.895198 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-03-18 03:06:56.895213 | orchestrator | Wednesday 18 March 2026 03:06:53 +0000 (0:00:00.550) 0:00:21.586 *******
2026-03-18 03:06:56.895227 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:06:56.895242 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:06:56.895258 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:06:56.895273 | orchestrator |
2026-03-18 03:06:56.895288 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-18 03:06:56.895304 | orchestrator | Wednesday 18 March 2026 03:06:54 +0000 (0:00:00.362) 0:00:21.948 *******
2026-03-18 03:06:56.895320 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 03:06:56.895336 | orchestrator |
2026-03-18 03:06:56.895352 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-03-18 03:06:56.895367 | orchestrator |
Wednesday 18 March 2026 03:06:54 +0000 (0:00:00.625) 0:00:22.574 ******* 2026-03-18 03:06:56.895398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-18 03:06:56.895448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-18 03:06:57.584834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-18 03:06:57.584957 | orchestrator | 2026-03-18 03:06:57.584972 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-18 03:06:57.584983 | orchestrator | Wednesday 18 March 2026 03:06:56 +0000 (0:00:01.974) 0:00:24.548 ******* 2026-03-18 03:06:57.585012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-18 03:06:57.585024 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:06:57.585041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-18 03:06:57.585058 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:06:57.585075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-18 03:07:00.245986 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:07:00.246099 | orchestrator | 2026-03-18 03:07:00.246107 | orchestrator | TASK [service-cert-copy : horizon | 
Copying over backend internal TLS key] ***** 2026-03-18 03:07:00.246113 | orchestrator | Wednesday 18 March 2026 03:06:57 +0000 (0:00:00.698) 0:00:25.246 ******* 2026-03-18 03:07:00.246134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-18 03:07:00.246141 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:07:00.246157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-18 03:07:00.246178 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:07:00.246187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-18 03:07:00.246195 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:07:00.246202 | orchestrator | 2026-03-18 03:07:00.246211 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-18 03:07:00.246249 | orchestrator | Wednesday 18 March 2026 03:06:58 +0000 (0:00:00.882) 0:00:26.129 ******* 2026-03-18 03:07:00.246269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-18 03:07:49.315529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-18 03:07:49.315660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-18 03:07:49.315694 | orchestrator | 
2026-03-18 03:07:49.315705 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-18 03:07:49.315714 | orchestrator | Wednesday 18 March 2026 03:07:00 +0000 (0:00:01.775) 0:00:27.905 *******
2026-03-18 03:07:49.315722 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:07:49.315731 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:07:49.315738 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:07:49.315746 | orchestrator |
2026-03-18 03:07:49.315754 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-18 03:07:49.315762 | orchestrator | Wednesday 18 March 2026 03:07:00 +0000 (0:00:00.336) 0:00:28.241 *******
2026-03-18 03:07:49.315770 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 03:07:49.315778 | orchestrator |
2026-03-18 03:07:49.315786 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-03-18 03:07:49.315794 | orchestrator | Wednesday 18 March 2026 03:07:01 +0000 (0:00:00.583) 0:00:28.824 *******
2026-03-18 03:07:49.315802 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:07:49.315809 | orchestrator |
2026-03-18 03:07:49.315817 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-03-18 03:07:49.315825 | orchestrator | Wednesday 18 March 2026 03:07:03 +0000 (0:00:02.167) 0:00:30.992 *******
2026-03-18 03:07:49.315832 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:07:49.315840 | orchestrator |
2026-03-18 03:07:49.315848 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-03-18 03:07:49.315856 | orchestrator | Wednesday 18 March 2026 03:07:05 +0000 (0:00:16.483) 0:00:33.606 *******
2026-03-18 03:07:49.315863 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:07:49.315871 | orchestrator
| 2026-03-18 03:07:49.315879 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-18 03:07:49.315886 | orchestrator | Wednesday 18 March 2026 03:07:22 +0000 (0:00:16.483) 0:00:50.090 ******* 2026-03-18 03:07:49.315894 | orchestrator | 2026-03-18 03:07:49.315902 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-18 03:07:49.315915 | orchestrator | Wednesday 18 March 2026 03:07:22 +0000 (0:00:00.073) 0:00:50.163 ******* 2026-03-18 03:07:49.315923 | orchestrator | 2026-03-18 03:07:49.315930 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-18 03:07:49.315938 | orchestrator | Wednesday 18 March 2026 03:07:22 +0000 (0:00:00.069) 0:00:50.233 ******* 2026-03-18 03:07:49.315946 | orchestrator | 2026-03-18 03:07:49.315954 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-18 03:07:49.315962 | orchestrator | Wednesday 18 March 2026 03:07:22 +0000 (0:00:00.075) 0:00:50.308 ******* 2026-03-18 03:07:49.315969 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:07:49.315977 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:07:49.315985 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:07:49.315993 | orchestrator | 2026-03-18 03:07:49.316001 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:07:49.316011 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-18 03:07:49.316021 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-18 03:07:49.316030 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-18 03:07:49.316040 | orchestrator | 2026-03-18 03:07:49.316049 | orchestrator | 2026-03-18 03:07:49.316058 
| orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:07:49.316067 | orchestrator | Wednesday 18 March 2026 03:07:49 +0000 (0:00:26.643) 0:01:16.951 ******* 2026-03-18 03:07:49.316076 | orchestrator | =============================================================================== 2026-03-18 03:07:49.316085 | orchestrator | horizon : Restart horizon container ------------------------------------ 26.64s 2026-03-18 03:07:49.316094 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.48s 2026-03-18 03:07:49.316104 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.61s 2026-03-18 03:07:49.316112 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.17s 2026-03-18 03:07:49.316121 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.97s 2026-03-18 03:07:49.316130 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.96s 2026-03-18 03:07:49.316140 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.85s 2026-03-18 03:07:49.316149 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.83s 2026-03-18 03:07:49.316163 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.78s 2026-03-18 03:07:49.316172 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.59s 2026-03-18 03:07:49.316182 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.20s 2026-03-18 03:07:49.316191 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.88s 2026-03-18 03:07:49.316200 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s 2026-03-18 03:07:49.316214 | orchestrator | 
service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.70s 2026-03-18 03:07:49.755763 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.63s 2026-03-18 03:07:49.755834 | orchestrator | horizon : Update policy file name --------------------------------------- 0.59s 2026-03-18 03:07:49.755840 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s 2026-03-18 03:07:49.755846 | orchestrator | horizon : Update policy file name --------------------------------------- 0.58s 2026-03-18 03:07:49.755851 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.58s 2026-03-18 03:07:49.755856 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.55s 2026-03-18 03:07:52.219875 | orchestrator | 2026-03-18 03:07:52 | INFO  | Task 520fe875-fa38-445b-bd25-a94ed47b9e35 (skyline) was prepared for execution. 2026-03-18 03:07:52.219967 | orchestrator | 2026-03-18 03:07:52 | INFO  | It takes a moment until task 520fe875-fa38-445b-bd25-a94ed47b9e35 (skyline) has been started and output is visible here. 
2026-03-18 03:08:23.440538 | orchestrator | 2026-03-18 03:08:23.440670 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 03:08:23.440698 | orchestrator | 2026-03-18 03:08:23.440719 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 03:08:23.440733 | orchestrator | Wednesday 18 March 2026 03:07:56 +0000 (0:00:00.303) 0:00:00.303 ******* 2026-03-18 03:08:23.440744 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:08:23.440756 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:08:23.440767 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:08:23.440778 | orchestrator | 2026-03-18 03:08:23.440789 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 03:08:23.440800 | orchestrator | Wednesday 18 March 2026 03:07:57 +0000 (0:00:00.344) 0:00:00.648 ******* 2026-03-18 03:08:23.440811 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-03-18 03:08:23.440822 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-03-18 03:08:23.440833 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-03-18 03:08:23.440844 | orchestrator | 2026-03-18 03:08:23.440854 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-03-18 03:08:23.440865 | orchestrator | 2026-03-18 03:08:23.440876 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-03-18 03:08:23.440887 | orchestrator | Wednesday 18 March 2026 03:07:57 +0000 (0:00:00.477) 0:00:01.126 ******* 2026-03-18 03:08:23.440899 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:08:23.440911 | orchestrator | 2026-03-18 03:08:23.440921 | orchestrator | TASK [service-ks-register : skyline | Creating services] *********************** 
2026-03-18 03:08:23.440932 | orchestrator | Wednesday 18 March 2026 03:07:58 +0000 (0:00:00.625) 0:00:01.751 ******* 2026-03-18 03:08:23.440943 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel)) 2026-03-18 03:08:23.440954 | orchestrator | 2026-03-18 03:08:23.440965 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] ********************** 2026-03-18 03:08:23.440976 | orchestrator | Wednesday 18 March 2026 03:08:01 +0000 (0:00:03.318) 0:00:05.070 ******* 2026-03-18 03:08:23.440987 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal) 2026-03-18 03:08:23.440998 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public) 2026-03-18 03:08:23.441008 | orchestrator | 2026-03-18 03:08:23.441019 | orchestrator | TASK [service-ks-register : skyline | Creating projects] *********************** 2026-03-18 03:08:23.441030 | orchestrator | Wednesday 18 March 2026 03:08:07 +0000 (0:00:06.356) 0:00:11.426 ******* 2026-03-18 03:08:23.441041 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-18 03:08:23.441052 | orchestrator | 2026-03-18 03:08:23.441063 | orchestrator | TASK [service-ks-register : skyline | Creating users] ************************** 2026-03-18 03:08:23.441074 | orchestrator | Wednesday 18 March 2026 03:08:11 +0000 (0:00:03.133) 0:00:14.560 ******* 2026-03-18 03:08:23.441087 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-18 03:08:23.441101 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service) 2026-03-18 03:08:23.441114 | orchestrator | 2026-03-18 03:08:23.441126 | orchestrator | TASK [service-ks-register : skyline | Creating roles] ************************** 2026-03-18 03:08:23.441139 | orchestrator | Wednesday 18 March 2026 03:08:14 +0000 (0:00:03.946) 0:00:18.506 ******* 2026-03-18 03:08:23.441152 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-03-18 03:08:23.441164 | orchestrator | 2026-03-18 03:08:23.441176 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-03-18 03:08:23.441216 | orchestrator | Wednesday 18 March 2026 03:08:18 +0000 (0:00:03.247) 0:00:21.753 ******* 2026-03-18 03:08:23.441230 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-03-18 03:08:23.441242 | orchestrator | 2026-03-18 03:08:23.441255 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-03-18 03:08:23.441268 | orchestrator | Wednesday 18 March 2026 03:08:22 +0000 (0:00:03.787) 0:00:25.541 ******* 2026-03-18 03:08:23.441301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:23.441340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:23.441355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:23.441398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:23.441431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:23.441454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:27.498198 | orchestrator | 2026-03-18 03:08:27.498323 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-03-18 03:08:27.498342 | orchestrator | Wednesday 18 March 2026 03:08:23 +0000 (0:00:01.402) 0:00:26.943 ******* 2026-03-18 03:08:27.498356 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:08:27.498395 | orchestrator | 2026-03-18 03:08:27.498407 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-03-18 03:08:27.498419 | orchestrator | Wednesday 18 March 2026 03:08:24 +0000 (0:00:00.828) 0:00:27.772 ******* 2026-03-18 03:08:27.498433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:27.498486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:27.498541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:27.498575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:27.498588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:27.498600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:27.498621 | orchestrator | 2026-03-18 03:08:27.498635 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-03-18 03:08:27.498649 | orchestrator | Wednesday 18 March 2026 03:08:26 +0000 (0:00:02.573) 0:00:30.346 ******* 2026-03-18 03:08:27.498668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-18 03:08:27.498682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-18 03:08:27.498697 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:08:27.498720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-18 03:08:28.906190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-18 03:08:28.906324 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:08:28.906420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-18 03:08:28.906436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-18 03:08:28.906447 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:08:28.906459 | orchestrator | 2026-03-18 03:08:28.906471 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-03-18 03:08:28.906483 | orchestrator | Wednesday 18 March 2026 03:08:27 +0000 (0:00:00.660) 0:00:31.006 ******* 2026-03-18 03:08:28.906495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-18 03:08:28.906526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-18 03:08:28.906547 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:08:28.906564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-18 03:08:28.906577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-18 03:08:28.906588 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:08:28.906599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-18 03:08:28.906619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-18 03:08:37.505552 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:08:37.505662 | orchestrator | 2026-03-18 03:08:37.505679 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-03-18 03:08:37.505691 | orchestrator | Wednesday 18 March 2026 03:08:28 +0000 (0:00:01.403) 0:00:32.410 ******* 2026-03-18 03:08:37.505704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:37.505739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:37.505758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:37.505779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:37.505858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:37.505887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:37.505903 | orchestrator | 2026-03-18 03:08:37.505917 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-03-18 03:08:37.505932 | orchestrator | Wednesday 18 March 2026 03:08:31 +0000 (0:00:02.421) 0:00:34.831 ******* 2026-03-18 03:08:37.505947 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-18 03:08:37.505962 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-18 03:08:37.505977 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-18 03:08:37.505992 | orchestrator | 2026-03-18 03:08:37.506008 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-03-18 03:08:37.506108 | orchestrator | Wednesday 18 March 2026 03:08:32 +0000 (0:00:01.565) 0:00:36.396 ******* 2026-03-18 03:08:37.506128 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-18 03:08:37.506143 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-18 03:08:37.506158 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-18 03:08:37.506174 | orchestrator | 2026-03-18 03:08:37.506190 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-03-18 03:08:37.506223 | orchestrator | Wednesday 18 March 2026 03:08:35 +0000 (0:00:02.129) 0:00:38.526 ******* 2026-03-18 03:08:37.506241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:37.506274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:39.880141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:39.880236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:39.880250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:39.880277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:39.880287 | orchestrator | 2026-03-18 03:08:39.880296 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-03-18 03:08:39.880305 | orchestrator | Wednesday 18 March 2026 03:08:37 +0000 (0:00:02.488) 0:00:41.014 ******* 2026-03-18 03:08:39.880313 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:08:39.880326 | orchestrator | skipping: 
[testbed-node-1] 2026-03-18 03:08:39.880339 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:08:39.880436 | orchestrator | 2026-03-18 03:08:39.880469 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-03-18 03:08:39.880482 | orchestrator | Wednesday 18 March 2026 03:08:37 +0000 (0:00:00.342) 0:00:41.357 ******* 2026-03-18 03:08:39.880504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:39.880519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:39.880544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:39.880552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 03:08:39.880573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 03:09:18.270596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-18 03:09:18.270708 | orchestrator | 2026-03-18 03:09:18.270720 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-03-18 03:09:18.270728 | orchestrator | Wednesday 18 March 2026 03:08:39 +0000 (0:00:02.030) 0:00:43.388 ******* 2026-03-18 03:09:18.270735 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:09:18.270743 | orchestrator | 2026-03-18 03:09:18.270749 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-03-18 03:09:18.270755 | orchestrator | Wednesday 18 March 2026 03:08:41 +0000 (0:00:02.016) 0:00:45.405 ******* 2026-03-18 03:09:18.270761 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:09:18.270768 | orchestrator | 2026-03-18 03:09:18.270774 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-03-18 03:09:18.270780 | orchestrator | Wednesday 18 March 2026 03:08:44 +0000 (0:00:02.363) 0:00:47.768 ******* 2026-03-18 03:09:18.270786 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:09:18.270792 | orchestrator | 2026-03-18 03:09:18.270798 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-03-18 03:09:18.270804 | orchestrator | Wednesday 18 March 2026 03:08:51 +0000 (0:00:07.734) 0:00:55.503 ******* 2026-03-18 03:09:18.270810 | orchestrator | 2026-03-18 03:09:18.270817 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-03-18 03:09:18.270824 | orchestrator | Wednesday 18 March 2026 03:08:52 +0000 (0:00:00.082) 0:00:55.585 ******* 2026-03-18 03:09:18.270830 | orchestrator | 2026-03-18 03:09:18.270836 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-03-18 03:09:18.270842 | orchestrator | Wednesday 18 March 2026 03:08:52 +0000 (0:00:00.078) 0:00:55.663 ******* 2026-03-18 03:09:18.270848 | orchestrator | 2026-03-18 03:09:18.270854 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-03-18 03:09:18.270860 | orchestrator | Wednesday 18 March 2026 03:08:52 +0000 (0:00:00.075) 0:00:55.739 ******* 2026-03-18 03:09:18.270867 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:09:18.270873 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:09:18.270879 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:09:18.270885 | orchestrator | 2026-03-18 03:09:18.270891 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-03-18 03:09:18.270897 | orchestrator | Wednesday 18 March 2026 03:09:02 +0000 (0:00:10.571) 0:01:06.310 ******* 2026-03-18 03:09:18.270903 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:09:18.270909 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:09:18.270916 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:09:18.270922 | orchestrator | 2026-03-18 03:09:18.270928 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:09:18.270935 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-18 03:09:18.270943 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-18 03:09:18.270950 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-18 03:09:18.270956 | orchestrator | 2026-03-18 03:09:18.270962 | orchestrator | 2026-03-18 03:09:18.270968 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:09:18.270974 | orchestrator | Wednesday 18 
March 2026 03:09:17 +0000 (0:00:15.085) 0:01:21.395 ******* 2026-03-18 03:09:18.270980 | orchestrator | =============================================================================== 2026-03-18 03:09:18.270986 | orchestrator | skyline : Restart skyline-console container ---------------------------- 15.09s 2026-03-18 03:09:18.270993 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 10.57s 2026-03-18 03:09:18.271004 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.73s 2026-03-18 03:09:18.271010 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.36s 2026-03-18 03:09:18.271016 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 3.95s 2026-03-18 03:09:18.271022 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.79s 2026-03-18 03:09:18.271028 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.32s 2026-03-18 03:09:18.271047 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.25s 2026-03-18 03:09:18.271064 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.13s 2026-03-18 03:09:18.271071 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.57s 2026-03-18 03:09:18.271077 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.49s 2026-03-18 03:09:18.271087 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.42s 2026-03-18 03:09:18.271098 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.36s 2026-03-18 03:09:18.271108 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.13s 2026-03-18 03:09:18.271114 | orchestrator | skyline : Check skyline container 
--------------------------------------- 2.03s 2026-03-18 03:09:18.271120 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.02s 2026-03-18 03:09:18.271126 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.57s 2026-03-18 03:09:18.271135 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.40s 2026-03-18 03:09:18.271142 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.40s 2026-03-18 03:09:18.271149 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.83s 2026-03-18 03:09:20.919249 | orchestrator | 2026-03-18 03:09:20 | INFO  | Task 457fe0b6-5568-498a-af90-d3e0a2bac462 (glance) was prepared for execution. 2026-03-18 03:09:20.919361 | orchestrator | 2026-03-18 03:09:20 | INFO  | It takes a moment until task 457fe0b6-5568-498a-af90-d3e0a2bac462 (glance) has been started and output is visible here. 
2026-03-18 03:09:55.187452 | orchestrator | 2026-03-18 03:09:55.187556 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 03:09:55.187570 | orchestrator | 2026-03-18 03:09:55.187580 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 03:09:55.187590 | orchestrator | Wednesday 18 March 2026 03:09:25 +0000 (0:00:00.278) 0:00:00.278 ******* 2026-03-18 03:09:55.187599 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:09:55.187610 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:09:55.187619 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:09:55.187628 | orchestrator | 2026-03-18 03:09:55.187637 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 03:09:55.187645 | orchestrator | Wednesday 18 March 2026 03:09:25 +0000 (0:00:00.320) 0:00:00.599 ******* 2026-03-18 03:09:55.187654 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-18 03:09:55.187664 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-18 03:09:55.187672 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-18 03:09:55.187681 | orchestrator | 2026-03-18 03:09:55.187690 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-18 03:09:55.187699 | orchestrator | 2026-03-18 03:09:55.187707 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-18 03:09:55.187716 | orchestrator | Wednesday 18 March 2026 03:09:26 +0000 (0:00:00.486) 0:00:01.085 ******* 2026-03-18 03:09:55.187725 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:09:55.187734 | orchestrator | 2026-03-18 03:09:55.187743 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-18 
03:09:55.187771 | orchestrator | Wednesday 18 March 2026 03:09:26 +0000 (0:00:00.603) 0:00:01.689 ******* 2026-03-18 03:09:55.187780 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-18 03:09:55.187788 | orchestrator | 2026-03-18 03:09:55.187797 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-18 03:09:55.187806 | orchestrator | Wednesday 18 March 2026 03:09:30 +0000 (0:00:03.535) 0:00:05.225 ******* 2026-03-18 03:09:55.187814 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-18 03:09:55.187824 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-18 03:09:55.187832 | orchestrator | 2026-03-18 03:09:55.187841 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-18 03:09:55.187849 | orchestrator | Wednesday 18 March 2026 03:09:36 +0000 (0:00:06.384) 0:00:11.609 ******* 2026-03-18 03:09:55.187859 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-18 03:09:55.187868 | orchestrator | 2026-03-18 03:09:55.187877 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-18 03:09:55.187886 | orchestrator | Wednesday 18 March 2026 03:09:39 +0000 (0:00:03.265) 0:00:14.874 ******* 2026-03-18 03:09:55.187894 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-18 03:09:55.187903 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-18 03:09:55.187912 | orchestrator | 2026-03-18 03:09:55.187920 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-18 03:09:55.187929 | orchestrator | Wednesday 18 March 2026 03:09:43 +0000 (0:00:03.909) 0:00:18.784 ******* 2026-03-18 03:09:55.187938 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-18 
03:09:55.187946 | orchestrator | 2026-03-18 03:09:55.187955 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-18 03:09:55.187963 | orchestrator | Wednesday 18 March 2026 03:09:47 +0000 (0:00:03.211) 0:00:21.995 ******* 2026-03-18 03:09:55.187972 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-18 03:09:55.187981 | orchestrator | 2026-03-18 03:09:55.187989 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-18 03:09:55.187998 | orchestrator | Wednesday 18 March 2026 03:09:50 +0000 (0:00:03.731) 0:00:25.727 ******* 2026-03-18 03:09:55.188044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 03:09:55.188065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 03:09:55.188079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 03:09:55.188089 | orchestrator | 2026-03-18 03:09:55.188098 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-03-18 03:09:55.188107 | orchestrator | Wednesday 18 March 2026 03:09:54 +0000 (0:00:03.600) 0:00:29.328 ******* 2026-03-18 03:09:55.188116 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:09:55.188125 | orchestrator | 2026-03-18 03:09:55.188139 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-18 03:10:11.398164 | orchestrator | Wednesday 18 March 2026 03:09:55 +0000 (0:00:00.803) 0:00:30.131 ******* 2026-03-18 03:10:11.398374 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:10:11.398400 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:10:11.398413 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:10:11.398427 | orchestrator | 2026-03-18 03:10:11.398442 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-18 03:10:11.398457 | orchestrator | Wednesday 18 March 2026 03:09:58 +0000 (0:00:03.752) 0:00:33.884 ******* 2026-03-18 03:10:11.398472 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-18 03:10:11.398489 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-18 03:10:11.398503 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-18 03:10:11.398517 | orchestrator | 2026-03-18 03:10:11.398532 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-18 03:10:11.398546 | orchestrator | Wednesday 18 March 2026 03:10:00 +0000 (0:00:01.517) 0:00:35.402 ******* 2026-03-18 03:10:11.398560 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-18 
03:10:11.398575 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-18 03:10:11.398589 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-18 03:10:11.398602 | orchestrator | 2026-03-18 03:10:11.398616 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-18 03:10:11.398630 | orchestrator | Wednesday 18 March 2026 03:10:01 +0000 (0:00:01.491) 0:00:36.893 ******* 2026-03-18 03:10:11.398645 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:10:11.398660 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:10:11.398674 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:10:11.398689 | orchestrator | 2026-03-18 03:10:11.398705 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-18 03:10:11.398719 | orchestrator | Wednesday 18 March 2026 03:10:02 +0000 (0:00:00.713) 0:00:37.606 ******* 2026-03-18 03:10:11.398735 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:10:11.398751 | orchestrator | 2026-03-18 03:10:11.398767 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-18 03:10:11.398783 | orchestrator | Wednesday 18 March 2026 03:10:02 +0000 (0:00:00.143) 0:00:37.750 ******* 2026-03-18 03:10:11.398799 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:10:11.398815 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:10:11.398830 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:10:11.398846 | orchestrator | 2026-03-18 03:10:11.398857 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-18 03:10:11.398868 | orchestrator | Wednesday 18 March 2026 03:10:03 +0000 (0:00:00.331) 0:00:38.081 ******* 2026-03-18 03:10:11.398878 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:10:11.398888 | orchestrator | 2026-03-18 03:10:11.398898 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-18 03:10:11.398908 | orchestrator | Wednesday 18 March 2026 03:10:03 +0000 (0:00:00.813) 0:00:38.895 ******* 2026-03-18 03:10:11.398941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 03:10:11.398990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 03:10:11.399007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 03:10:11.399023 | orchestrator | 2026-03-18 03:10:11.399033 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-18 03:10:11.399042 | orchestrator | Wednesday 18 March 2026 03:10:08 +0000 (0:00:04.137) 0:00:43.032 ******* 2026-03-18 03:10:11.399059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-18 03:10:15.353616 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:10:15.353730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-18 03:10:15.353782 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:10:15.353793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-18 03:10:15.353802 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:10:15.353811 | orchestrator | 2026-03-18 03:10:15.353820 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-18 03:10:15.353831 | orchestrator | Wednesday 18 March 2026 03:10:11 +0000 (0:00:03.311) 0:00:46.344 ******* 2026-03-18 03:10:15.353857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-18 03:10:15.353867 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:10:15.353881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-18 03:10:15.353899 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:10:15.353915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-18 03:10:52.034918 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:10:52.035028 | orchestrator | 2026-03-18 03:10:52.035044 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-18 03:10:52.035056 | orchestrator | Wednesday 18 March 2026 03:10:15 +0000 (0:00:03.952) 0:00:50.297 ******* 2026-03-18 03:10:52.035066 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:10:52.035076 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:10:52.035086 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:10:52.035096 | orchestrator | 2026-03-18 03:10:52.035106 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-18 03:10:52.035138 | orchestrator | Wednesday 18 March 2026 03:10:18 +0000 (0:00:03.508) 0:00:53.805 ******* 2026-03-18 03:10:52.035172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 03:10:52.035252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 03:10:52.035343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 03:10:52.035380 | orchestrator | 2026-03-18 03:10:52.035397 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-18 03:10:52.035414 | orchestrator | Wednesday 18 March 2026 03:10:23 +0000 (0:00:04.154) 0:00:57.960 ******* 2026-03-18 03:10:52.035432 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:10:52.035444 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:10:52.035454 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:10:52.035465 | orchestrator | 2026-03-18 03:10:52.035477 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-18 03:10:52.035488 | orchestrator | Wednesday 18 March 2026 03:10:29 +0000 (0:00:06.068) 0:01:04.028 ******* 2026-03-18 03:10:52.035500 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:10:52.035511 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:10:52.035523 | 
orchestrator | skipping: [testbed-node-1] 2026-03-18 03:10:52.035534 | orchestrator | 2026-03-18 03:10:52.035545 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-18 03:10:52.035556 | orchestrator | Wednesday 18 March 2026 03:10:32 +0000 (0:00:03.702) 0:01:07.731 ******* 2026-03-18 03:10:52.035568 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:10:52.035579 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:10:52.035591 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:10:52.035602 | orchestrator | 2026-03-18 03:10:52.035613 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-18 03:10:52.035624 | orchestrator | Wednesday 18 March 2026 03:10:36 +0000 (0:00:03.453) 0:01:11.185 ******* 2026-03-18 03:10:52.035636 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:10:52.035648 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:10:52.035659 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:10:52.035670 | orchestrator | 2026-03-18 03:10:52.035681 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-18 03:10:52.035692 | orchestrator | Wednesday 18 March 2026 03:10:39 +0000 (0:00:03.464) 0:01:14.649 ******* 2026-03-18 03:10:52.035703 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:10:52.035714 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:10:52.035725 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:10:52.035736 | orchestrator | 2026-03-18 03:10:52.035748 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-18 03:10:52.035759 | orchestrator | Wednesday 18 March 2026 03:10:43 +0000 (0:00:03.797) 0:01:18.447 ******* 2026-03-18 03:10:52.035770 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:10:52.035781 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:10:52.035793 | 
orchestrator | skipping: [testbed-node-2] 2026-03-18 03:10:52.035804 | orchestrator | 2026-03-18 03:10:52.035816 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-18 03:10:52.035827 | orchestrator | Wednesday 18 March 2026 03:10:44 +0000 (0:00:00.604) 0:01:19.051 ******* 2026-03-18 03:10:52.035847 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-18 03:10:52.035858 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:10:52.035868 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-18 03:10:52.035878 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:10:52.035888 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-18 03:10:52.035897 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:10:52.035907 | orchestrator | 2026-03-18 03:10:52.035917 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-18 03:10:52.035926 | orchestrator | Wednesday 18 March 2026 03:10:47 +0000 (0:00:03.454) 0:01:22.505 ******* 2026-03-18 03:10:52.035936 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:10:52.035946 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:10:52.035955 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:10:52.035965 | orchestrator | 2026-03-18 03:10:52.035975 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-18 03:10:52.035993 | orchestrator | Wednesday 18 March 2026 03:10:52 +0000 (0:00:04.473) 0:01:26.979 ******* 2026-03-18 03:12:10.877850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 03:12:10.877970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 03:12:10.878110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 03:12:10.878225 | orchestrator | 2026-03-18 03:12:10.878247 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-18 03:12:10.878268 | orchestrator | Wednesday 18 March 2026 03:10:55 +0000 (0:00:03.856) 0:01:30.835 ******* 2026-03-18 03:12:10.878287 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:12:10.878307 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:12:10.878326 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:12:10.878345 | orchestrator | 2026-03-18 03:12:10.878365 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-18 03:12:10.878384 | orchestrator | Wednesday 18 March 2026 03:10:56 +0000 (0:00:00.584) 0:01:31.420 ******* 2026-03-18 03:12:10.878402 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:12:10.878421 | orchestrator | 2026-03-18 03:12:10.878439 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-03-18 03:12:10.878457 | orchestrator | Wednesday 18 March 2026 03:10:58 +0000 (0:00:02.035) 0:01:33.455 ******* 2026-03-18 03:12:10.878475 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:12:10.878493 | orchestrator | 2026-03-18 03:12:10.878510 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-18 03:12:10.878530 | orchestrator | Wednesday 18 March 2026 03:11:00 +0000 (0:00:02.175) 0:01:35.630 ******* 2026-03-18 03:12:10.878549 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:12:10.878566 | orchestrator | 2026-03-18 03:12:10.878583 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-18 03:12:10.878602 | orchestrator | Wednesday 18 March 2026 03:11:02 +0000 (0:00:01.984) 0:01:37.615 ******* 2026-03-18 03:12:10.878636 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:12:10.878655 | orchestrator | 2026-03-18 03:12:10.878674 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-18 03:12:10.878694 | orchestrator | Wednesday 18 March 2026 03:11:30 +0000 (0:00:27.703) 0:02:05.319 ******* 2026-03-18 03:12:10.878711 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:12:10.878729 | orchestrator | 2026-03-18 03:12:10.878744 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-18 03:12:10.878763 | orchestrator | Wednesday 18 March 2026 03:11:32 +0000 (0:00:02.141) 0:02:07.460 ******* 2026-03-18 03:12:10.878779 | orchestrator | 2026-03-18 03:12:10.878794 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-18 03:12:10.878810 | orchestrator | Wednesday 18 March 2026 03:11:32 +0000 (0:00:00.082) 0:02:07.543 ******* 2026-03-18 03:12:10.878827 | orchestrator | 2026-03-18 03:12:10.878843 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-03-18 03:12:10.878859 | orchestrator | Wednesday 18 March 2026 03:11:32 +0000 (0:00:00.070) 0:02:07.613 ******* 2026-03-18 03:12:10.878876 | orchestrator | 2026-03-18 03:12:10.878893 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-18 03:12:10.878911 | orchestrator | Wednesday 18 March 2026 03:11:32 +0000 (0:00:00.072) 0:02:07.686 ******* 2026-03-18 03:12:10.878928 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:12:10.878946 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:12:10.878964 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:12:10.878982 | orchestrator | 2026-03-18 03:12:10.878999 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:12:10.879018 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-18 03:12:10.879038 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-18 03:12:10.879053 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-18 03:12:10.879069 | orchestrator | 2026-03-18 03:12:10.879085 | orchestrator | 2026-03-18 03:12:10.879101 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:12:10.879166 | orchestrator | Wednesday 18 March 2026 03:12:10 +0000 (0:00:38.130) 0:02:45.816 ******* 2026-03-18 03:12:10.879185 | orchestrator | =============================================================================== 2026-03-18 03:12:10.879202 | orchestrator | glance : Restart glance-api container ---------------------------------- 38.13s 2026-03-18 03:12:10.879220 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.70s 2026-03-18 03:12:10.879238 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.38s 2026-03-18 03:12:10.879281 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.07s 2026-03-18 03:12:11.291306 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.47s 2026-03-18 03:12:11.291415 | orchestrator | glance : Copying over config.json files for services -------------------- 4.15s 2026-03-18 03:12:11.291431 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.14s 2026-03-18 03:12:11.291444 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.95s 2026-03-18 03:12:11.291458 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.91s 2026-03-18 03:12:11.291471 | orchestrator | glance : Check glance containers ---------------------------------------- 3.86s 2026-03-18 03:12:11.291484 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.80s 2026-03-18 03:12:11.291497 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.75s 2026-03-18 03:12:11.291511 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.73s 2026-03-18 03:12:11.291561 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.70s 2026-03-18 03:12:11.291570 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.60s 2026-03-18 03:12:11.291578 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.54s 2026-03-18 03:12:11.291586 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.51s 2026-03-18 03:12:11.291595 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.46s 2026-03-18 03:12:11.291603 | orchestrator | 
glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.45s 2026-03-18 03:12:11.291611 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.45s 2026-03-18 03:12:13.933938 | orchestrator | 2026-03-18 03:12:13 | INFO  | Task 79ac7a4b-4875-4cd6-b62a-301197e92270 (cinder) was prepared for execution. 2026-03-18 03:12:13.934106 | orchestrator | 2026-03-18 03:12:13 | INFO  | It takes a moment until task 79ac7a4b-4875-4cd6-b62a-301197e92270 (cinder) has been started and output is visible here. 2026-03-18 03:12:49.392985 | orchestrator | 2026-03-18 03:12:49.393133 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 03:12:49.393146 | orchestrator | 2026-03-18 03:12:49.393151 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 03:12:49.393157 | orchestrator | Wednesday 18 March 2026 03:12:18 +0000 (0:00:00.268) 0:00:00.268 ******* 2026-03-18 03:12:49.393162 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:12:49.393169 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:12:49.393174 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:12:49.393179 | orchestrator | 2026-03-18 03:12:49.393184 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 03:12:49.393189 | orchestrator | Wednesday 18 March 2026 03:12:18 +0000 (0:00:00.336) 0:00:00.604 ******* 2026-03-18 03:12:49.393194 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-18 03:12:49.393199 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-18 03:12:49.393204 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-18 03:12:49.393209 | orchestrator | 2026-03-18 03:12:49.393214 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-18 03:12:49.393219 | orchestrator | 2026-03-18 
03:12:49.393223 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-18 03:12:49.393228 | orchestrator | Wednesday 18 March 2026 03:12:19 +0000 (0:00:00.461) 0:00:01.066 ******* 2026-03-18 03:12:49.393233 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:12:49.393239 | orchestrator | 2026-03-18 03:12:49.393244 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-18 03:12:49.393248 | orchestrator | Wednesday 18 March 2026 03:12:19 +0000 (0:00:00.613) 0:00:01.680 ******* 2026-03-18 03:12:49.393254 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-18 03:12:49.393259 | orchestrator | 2026-03-18 03:12:49.393264 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-18 03:12:49.393269 | orchestrator | Wednesday 18 March 2026 03:12:23 +0000 (0:00:03.368) 0:00:05.048 ******* 2026-03-18 03:12:49.393274 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-18 03:12:49.393280 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-18 03:12:49.393286 | orchestrator | 2026-03-18 03:12:49.393290 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-18 03:12:49.393295 | orchestrator | Wednesday 18 March 2026 03:12:29 +0000 (0:00:06.345) 0:00:11.394 ******* 2026-03-18 03:12:49.393300 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-18 03:12:49.393305 | orchestrator | 2026-03-18 03:12:49.393310 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-18 03:12:49.393331 | orchestrator | Wednesday 18 March 2026 03:12:32 +0000 (0:00:03.302) 
0:00:14.697 ******* 2026-03-18 03:12:49.393336 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-18 03:12:49.393341 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-18 03:12:49.393346 | orchestrator | 2026-03-18 03:12:49.393351 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-18 03:12:49.393356 | orchestrator | Wednesday 18 March 2026 03:12:36 +0000 (0:00:04.090) 0:00:18.787 ******* 2026-03-18 03:12:49.393361 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-18 03:12:49.393366 | orchestrator | 2026-03-18 03:12:49.393370 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-18 03:12:49.393375 | orchestrator | Wednesday 18 March 2026 03:12:40 +0000 (0:00:03.146) 0:00:21.934 ******* 2026-03-18 03:12:49.393380 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-18 03:12:49.393385 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-18 03:12:49.393390 | orchestrator | 2026-03-18 03:12:49.393394 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-18 03:12:49.393399 | orchestrator | Wednesday 18 March 2026 03:12:47 +0000 (0:00:07.303) 0:00:29.238 ******* 2026-03-18 03:12:49.393418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-18 03:12:49.393439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-18 03:12:49.393445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-18 03:12:49.393456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:12:49.393462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:12:49.393470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:12:49.393476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-18 03:12:49.393485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-18 03:12:55.493295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-18 03:12:55.493470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-18 03:12:55.493503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-18 03:12:55.493544 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-18 03:12:55.493559 | orchestrator | 2026-03-18 03:12:55.493572 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-18 03:12:55.493585 | orchestrator | Wednesday 18 March 2026 03:12:49 +0000 (0:00:02.125) 0:00:31.363 ******* 2026-03-18 03:12:55.493596 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:12:55.493608 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:12:55.493619 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:12:55.493630 | orchestrator | 2026-03-18 03:12:55.493641 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-18 03:12:55.493652 | orchestrator | Wednesday 18 March 2026 03:12:50 +0000 (0:00:00.546) 0:00:31.910 ******* 2026-03-18 03:12:55.493663 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:12:55.493674 | orchestrator | 2026-03-18 03:12:55.493685 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-18 03:12:55.493696 | orchestrator | Wednesday 18 March 2026 03:12:50 +0000 (0:00:00.566) 0:00:32.476 ******* 2026-03-18 03:12:55.493707 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-18 03:12:55.493719 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-18 03:12:55.493730 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-18 03:12:55.493741 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-18 03:12:55.493752 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-18 03:12:55.493762 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-18 03:12:55.493773 | orchestrator | 2026-03-18 03:12:55.493784 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-18 03:12:55.493804 | orchestrator | Wednesday 18 March 2026 03:12:52 +0000 (0:00:01.710) 0:00:34.186 ******* 2026-03-18 03:12:55.493840 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-18 03:12:55.493859 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-18 03:12:55.493880 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-18 03:12:55.493914 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-18 03:12:55.493952 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-18 03:13:06.646229 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-18 03:13:06.646339 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-18 03:13:06.646353 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-18 03:13:06.646380 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-18 03:13:06.646390 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-18 03:13:06.646436 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-18 
03:13:06.646443 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-18 03:13:06.646450 | orchestrator | 2026-03-18 03:13:06.646457 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-18 03:13:06.646466 | orchestrator | Wednesday 18 March 2026 03:12:55 +0000 (0:00:03.551) 0:00:37.738 ******* 2026-03-18 03:13:06.646472 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-18 03:13:06.646480 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-18 03:13:06.646486 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-18 03:13:06.646492 | orchestrator | 2026-03-18 03:13:06.646499 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-18 03:13:06.646505 | orchestrator | Wednesday 18 March 2026 03:12:57 +0000 (0:00:01.554) 0:00:39.293 ******* 2026-03-18 03:13:06.646513 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-18 03:13:06.646520 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-18 03:13:06.646526 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-18 03:13:06.646533 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-18 03:13:06.646540 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-18 03:13:06.646546 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-18 03:13:06.646554 | orchestrator | 2026-03-18 03:13:06.646561 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-18 03:13:06.646573 | orchestrator | Wednesday 18 March 2026 03:13:00 +0000 (0:00:02.774) 0:00:42.067 ******* 2026-03-18 03:13:06.646582 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-18 03:13:06.646590 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-18 03:13:06.646597 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-18 03:13:06.646604 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-18 03:13:06.646611 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-18 03:13:06.646618 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-18 03:13:06.646631 | orchestrator | 2026-03-18 03:13:06.646638 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-18 03:13:06.646645 | orchestrator | Wednesday 18 March 2026 03:13:01 +0000 (0:00:01.044) 0:00:43.111 ******* 2026-03-18 03:13:06.646652 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:13:06.646659 | orchestrator | 2026-03-18 03:13:06.646666 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-18 03:13:06.646672 | orchestrator | Wednesday 18 March 2026 03:13:01 +0000 (0:00:00.148) 0:00:43.260 ******* 2026-03-18 03:13:06.646679 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:13:06.646685 | orchestrator | 
skipping: [testbed-node-1] 2026-03-18 03:13:06.646692 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:13:06.646698 | orchestrator | 2026-03-18 03:13:06.646705 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-18 03:13:06.646713 | orchestrator | Wednesday 18 March 2026 03:13:01 +0000 (0:00:00.544) 0:00:43.805 ******* 2026-03-18 03:13:06.646720 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:13:06.646728 | orchestrator | 2026-03-18 03:13:06.646736 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-18 03:13:06.646743 | orchestrator | Wednesday 18 March 2026 03:13:02 +0000 (0:00:00.612) 0:00:44.417 ******* 2026-03-18 03:13:06.646762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-18 03:13:07.593426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-18 03:13:07.593528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-18 03:13:07.593586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:13:07.593601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:13:07.593613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:13:07.593644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-18 03:13:07.593659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-18 03:13:07.593671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-18 
03:13:07.593697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-18 03:13:07.593709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-18 03:13:07.593721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-18 03:13:07.593732 | orchestrator | 2026-03-18 03:13:07.593743 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-18 03:13:07.593757 | orchestrator | Wednesday 18 March 2026 03:13:06 +0000 (0:00:04.198) 0:00:48.616 ******* 2026-03-18 03:13:07.593778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-18 03:13:07.701119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 03:13:07.701264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-18 03:13:07.701289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-18 03:13:07.701306 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:13:07.701323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-18 03:13:07.701340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 03:13:07.701380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-03-18 03:13:07.701409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-18 03:13:07.701425 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:13:07.701449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-18 03:13:07.701460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:13:07.701469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 03:13:07.701479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 03:13:07.701488 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:13:07.701497 | orchestrator |
2026-03-18 03:13:07.701507 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-03-18 03:13:07.701531 | orchestrator | Wednesday 18 March 2026 03:13:07 +0000 (0:00:00.969) 0:00:49.586 *******
2026-03-18 03:13:08.312646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-18 03:13:08.312731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:13:08.312740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 03:13:08.312746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 03:13:08.312751 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:13:08.312756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-18 03:13:08.312794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:13:08.312802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 03:13:08.312806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 03:13:08.312811 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:13:08.312815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-18 03:13:08.312819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:13:08.312827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 03:13:13.053625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 03:13:13.053719 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:13:13.053732 | orchestrator |
2026-03-18 03:13:13.053741 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-03-18 03:13:13.053751 | orchestrator | Wednesday 18 March 2026 03:13:08 +0000 (0:00:00.937) 0:00:50.524 *******
2026-03-18 03:13:13.053774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-18 03:13:13.053785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-18 03:13:13.053794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-18 03:13:13.053838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:13:13.053849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:13:13.053861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:13:13.053870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 03:13:13.053881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 03:13:13.053890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 03:13:13.053909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 03:13:26.628575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 03:13:26.628722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 03:13:26.628745 | orchestrator |
2026-03-18 03:13:26.628755 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-03-18 03:13:26.628763 | orchestrator | Wednesday 18 March 2026 03:13:13 +0000 (0:00:04.498) 0:00:55.023 *******
2026-03-18 03:13:26.628771 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-18 03:13:26.628780 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-18 03:13:26.628786 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-18 03:13:26.628793 | orchestrator |
2026-03-18 03:13:26.628800 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-03-18 03:13:26.628807 | orchestrator | Wednesday 18 March 2026 03:13:15 +0000 (0:00:01.964) 0:00:56.987 *******
2026-03-18 03:13:26.628816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-18 03:13:26.628846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-18 03:13:26.628871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-18 03:13:26.628884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:13:26.628892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:13:26.628900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:13:26.628907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 03:13:26.628922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 03:13:26.628936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 03:13:29.332320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 03:13:29.332406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 03:13:29.332414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 03:13:29.332438 | orchestrator |
2026-03-18 03:13:29.332454 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-03-18 03:13:29.332462 | orchestrator | Wednesday 18 March 2026 03:13:26 +0000 (0:00:11.665) 0:01:08.653 *******
2026-03-18 03:13:29.332468 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:13:29.332475 | orchestrator | changed: [testbed-node-1]
2026-03-18 03:13:29.332481 | orchestrator | changed: [testbed-node-2]
2026-03-18 03:13:29.332486 | orchestrator |
2026-03-18 03:13:29.332492 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-03-18 03:13:29.332498 | orchestrator | Wednesday 18 March 2026 03:13:28 +0000 (0:00:01.622) 0:01:10.276 *******
2026-03-18 03:13:29.332505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-18 03:13:29.332513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:13:29.332536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 03:13:29.332544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 03:13:29.332554 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:13:29.332561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-18 03:13:29.332567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:13:29.332573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 03:13:29.332588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 03:13:33.111290 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:13:33.111397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-18 03:13:33.111435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:13:33.111446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 03:13:33.111456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 03:13:33.111465 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:13:33.111473 | orchestrator |
2026-03-18
03:13:33.111482 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-18 03:13:33.111493 | orchestrator | Wednesday 18 March 2026 03:13:29 +0000 (0:00:01.028) 0:01:11.304 ******* 2026-03-18 03:13:33.111501 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:13:33.111509 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:13:33.111517 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:13:33.111525 | orchestrator | 2026-03-18 03:13:33.111533 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-18 03:13:33.111541 | orchestrator | Wednesday 18 March 2026 03:13:30 +0000 (0:00:00.647) 0:01:11.952 ******* 2026-03-18 03:13:33.111582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-18 03:13:33.111592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-18 03:13:33.111608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-18 03:13:33.111617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:13:33.111625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:13:33.111632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:13:33.111652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-18 03:15:12.259392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-18 03:15:12.259535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-18 03:15:12.259565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-18 03:15:12.259587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-18 03:15:12.259628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-03-18 03:15:12.259651 | orchestrator | 2026-03-18 03:15:12.259672 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-18 03:15:12.259723 | orchestrator | Wednesday 18 March 2026 03:13:33 +0000 (0:00:03.141) 0:01:15.093 ******* 2026-03-18 03:15:12.259744 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:15:12.259765 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:15:12.259783 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:15:12.259802 | orchestrator | 2026-03-18 03:15:12.259820 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-18 03:15:12.259838 | orchestrator | Wednesday 18 March 2026 03:13:33 +0000 (0:00:00.326) 0:01:15.419 ******* 2026-03-18 03:15:12.259856 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:15:12.259873 | orchestrator | 2026-03-18 03:15:12.259913 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-18 03:15:12.259934 | orchestrator | Wednesday 18 March 2026 03:13:35 +0000 (0:00:01.841) 0:01:17.260 ******* 2026-03-18 03:15:12.259987 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:15:12.260007 | orchestrator | 2026-03-18 03:15:12.260025 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-18 03:15:12.260045 | orchestrator | Wednesday 18 March 2026 03:13:37 +0000 (0:00:01.853) 0:01:19.113 ******* 2026-03-18 03:15:12.260062 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:15:12.260081 | orchestrator | 2026-03-18 03:15:12.260099 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-18 03:15:12.260118 | orchestrator | Wednesday 18 March 2026 03:13:56 +0000 (0:00:19.726) 0:01:38.840 ******* 2026-03-18 03:15:12.260137 | orchestrator | 2026-03-18 03:15:12.260156 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-03-18 03:15:12.260171 | orchestrator | Wednesday 18 March 2026 03:13:57 +0000 (0:00:00.071) 0:01:38.912 ******* 2026-03-18 03:15:12.260181 | orchestrator | 2026-03-18 03:15:12.260192 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-18 03:15:12.260202 | orchestrator | Wednesday 18 March 2026 03:13:57 +0000 (0:00:00.070) 0:01:38.983 ******* 2026-03-18 03:15:12.260213 | orchestrator | 2026-03-18 03:15:12.260223 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-18 03:15:12.260234 | orchestrator | Wednesday 18 March 2026 03:13:57 +0000 (0:00:00.075) 0:01:39.058 ******* 2026-03-18 03:15:12.260245 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:15:12.260255 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:15:12.260266 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:15:12.260276 | orchestrator | 2026-03-18 03:15:12.260287 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-18 03:15:12.260298 | orchestrator | Wednesday 18 March 2026 03:14:28 +0000 (0:00:30.913) 0:02:09.972 ******* 2026-03-18 03:15:12.260308 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:15:12.260319 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:15:12.260330 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:15:12.260340 | orchestrator | 2026-03-18 03:15:12.260351 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-18 03:15:12.260361 | orchestrator | Wednesday 18 March 2026 03:14:38 +0000 (0:00:10.694) 0:02:20.666 ******* 2026-03-18 03:15:12.260372 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:15:12.260382 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:15:12.260393 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:15:12.260403 | orchestrator | 2026-03-18 
03:15:12.260414 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-18 03:15:12.260425 | orchestrator | Wednesday 18 March 2026 03:15:05 +0000 (0:00:26.833) 0:02:47.500 ******* 2026-03-18 03:15:12.260435 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:15:12.260446 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:15:12.260457 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:15:12.260468 | orchestrator | 2026-03-18 03:15:12.260478 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-18 03:15:12.260489 | orchestrator | Wednesday 18 March 2026 03:15:11 +0000 (0:00:06.317) 0:02:53.818 ******* 2026-03-18 03:15:12.260511 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:15:12.260522 | orchestrator | 2026-03-18 03:15:12.260533 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:15:12.260545 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-18 03:15:12.260557 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-18 03:15:12.260568 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-18 03:15:12.260578 | orchestrator | 2026-03-18 03:15:12.260589 | orchestrator | 2026-03-18 03:15:12.260600 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:15:12.260610 | orchestrator | Wednesday 18 March 2026 03:15:12 +0000 (0:00:00.312) 0:02:54.130 ******* 2026-03-18 03:15:12.260621 | orchestrator | =============================================================================== 2026-03-18 03:15:12.260632 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 30.91s 2026-03-18 03:15:12.260642 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 26.83s 2026-03-18 03:15:12.260653 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.73s 2026-03-18 03:15:12.260664 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.67s 2026-03-18 03:15:12.260675 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.69s 2026-03-18 03:15:12.260685 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.30s 2026-03-18 03:15:12.260696 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.35s 2026-03-18 03:15:12.260714 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.32s 2026-03-18 03:15:12.260725 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.50s 2026-03-18 03:15:12.260736 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.20s 2026-03-18 03:15:12.260746 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.09s 2026-03-18 03:15:12.260757 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.55s 2026-03-18 03:15:12.260768 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.37s 2026-03-18 03:15:12.260778 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.30s 2026-03-18 03:15:12.260800 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.15s 2026-03-18 03:15:12.705486 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.14s 2026-03-18 03:15:12.705578 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.77s 2026-03-18 03:15:12.705588 | orchestrator | cinder : Ensuring 
config directories exist ------------------------------ 2.13s 2026-03-18 03:15:12.705597 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 1.96s 2026-03-18 03:15:12.705605 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 1.85s 2026-03-18 03:15:15.572421 | orchestrator | 2026-03-18 03:15:15 | INFO  | Task 365a5810-017a-4feb-b056-5fe2539f5e1a (barbican) was prepared for execution. 2026-03-18 03:15:15.572508 | orchestrator | 2026-03-18 03:15:15 | INFO  | It takes a moment until task 365a5810-017a-4feb-b056-5fe2539f5e1a (barbican) has been started and output is visible here. 2026-03-18 03:16:00.111808 | orchestrator | 2026-03-18 03:16:00.112006 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 03:16:00.112032 | orchestrator | 2026-03-18 03:16:00.112048 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 03:16:00.112063 | orchestrator | Wednesday 18 March 2026 03:15:20 +0000 (0:00:00.312) 0:00:00.312 ******* 2026-03-18 03:16:00.112108 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:16:00.112126 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:16:00.112142 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:16:00.112158 | orchestrator | 2026-03-18 03:16:00.112174 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 03:16:00.112190 | orchestrator | Wednesday 18 March 2026 03:15:20 +0000 (0:00:00.349) 0:00:00.662 ******* 2026-03-18 03:16:00.112206 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-18 03:16:00.112223 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-18 03:16:00.112239 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-18 03:16:00.112254 | orchestrator | 2026-03-18 03:16:00.112270 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-03-18 03:16:00.112286 | orchestrator | 2026-03-18 03:16:00.112301 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-18 03:16:00.112319 | orchestrator | Wednesday 18 March 2026 03:15:21 +0000 (0:00:00.462) 0:00:01.125 ******* 2026-03-18 03:16:00.112337 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:16:00.112356 | orchestrator | 2026-03-18 03:16:00.112373 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-18 03:16:00.112392 | orchestrator | Wednesday 18 March 2026 03:15:21 +0000 (0:00:00.600) 0:00:01.725 ******* 2026-03-18 03:16:00.112411 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-18 03:16:00.112429 | orchestrator | 2026-03-18 03:16:00.112448 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-18 03:16:00.112466 | orchestrator | Wednesday 18 March 2026 03:15:25 +0000 (0:00:03.426) 0:00:05.152 ******* 2026-03-18 03:16:00.112484 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-18 03:16:00.112502 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-18 03:16:00.112518 | orchestrator | 2026-03-18 03:16:00.112534 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-18 03:16:00.112548 | orchestrator | Wednesday 18 March 2026 03:15:31 +0000 (0:00:06.412) 0:00:11.565 ******* 2026-03-18 03:16:00.112563 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-18 03:16:00.112578 | orchestrator | 2026-03-18 03:16:00.112592 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-18 
03:16:00.112607 | orchestrator | Wednesday 18 March 2026 03:15:34 +0000 (0:00:03.226) 0:00:14.791 ******* 2026-03-18 03:16:00.112620 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-18 03:16:00.112635 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-18 03:16:00.112649 | orchestrator | 2026-03-18 03:16:00.112663 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-18 03:16:00.112678 | orchestrator | Wednesday 18 March 2026 03:15:38 +0000 (0:00:04.190) 0:00:18.982 ******* 2026-03-18 03:16:00.112692 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-18 03:16:00.112707 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-18 03:16:00.112721 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-18 03:16:00.112734 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-18 03:16:00.112749 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-18 03:16:00.112764 | orchestrator | 2026-03-18 03:16:00.112779 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-18 03:16:00.112815 | orchestrator | Wednesday 18 March 2026 03:15:54 +0000 (0:00:15.449) 0:00:34.431 ******* 2026-03-18 03:16:00.112831 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-18 03:16:00.112846 | orchestrator | 2026-03-18 03:16:00.112859 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-18 03:16:00.112888 | orchestrator | Wednesday 18 March 2026 03:15:58 +0000 (0:00:03.997) 0:00:38.428 ******* 2026-03-18 03:16:00.112940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-18 03:16:00.112988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-18 03:16:00.113001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-18 03:16:00.113011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:16:00.113027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:16:00.113048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:16:00.113065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:16:06.268636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:16:06.268794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:16:06.268819 | orchestrator | 2026-03-18 03:16:06.268837 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-18 03:16:06.268853 | orchestrator | Wednesday 18 March 2026 03:16:00 +0000 (0:00:01.691) 0:00:40.120 ******* 2026-03-18 03:16:06.268870 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-18 03:16:06.268886 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-18 03:16:06.268926 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-18 03:16:06.268942 | orchestrator | 2026-03-18 03:16:06.268957 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-18 03:16:06.268972 | orchestrator | Wednesday 18 March 2026 03:16:01 +0000 (0:00:01.343) 0:00:41.463 ******* 2026-03-18 03:16:06.268986 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:16:06.269000 | orchestrator | 2026-03-18 03:16:06.269014 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-18 03:16:06.269029 | orchestrator | Wednesday 18 March 2026 03:16:01 +0000 (0:00:00.362) 0:00:41.826 ******* 2026-03-18 03:16:06.269043 | orchestrator | 
skipping: [testbed-node-0] 2026-03-18 03:16:06.269059 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:16:06.269102 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:16:06.269118 | orchestrator | 2026-03-18 03:16:06.269134 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-18 03:16:06.269150 | orchestrator | Wednesday 18 March 2026 03:16:02 +0000 (0:00:00.323) 0:00:42.150 ******* 2026-03-18 03:16:06.269167 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:16:06.269183 | orchestrator | 2026-03-18 03:16:06.269212 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-18 03:16:06.269256 | orchestrator | Wednesday 18 March 2026 03:16:02 +0000 (0:00:00.581) 0:00:42.732 ******* 2026-03-18 03:16:06.269274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-18 03:16:06.269311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-18 03:16:06.269327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-18 03:16:06.269342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:16:06.269374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:16:06.269389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:16:06.269403 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:16:06.269426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:16:07.736660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:16:07.736757 | orchestrator | 2026-03-18 03:16:07.736771 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-03-18 03:16:07.736782 | orchestrator | Wednesday 18 March 2026 03:16:06 +0000 (0:00:03.539) 0:00:46.271 ******* 2026-03-18 03:16:07.736793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-18 03:16:07.736842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-18 03:16:07.736853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:16:07.736862 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:16:07.736873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-18 03:16:07.736942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-18 03:16:07.736955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:16:07.736972 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:16:07.736981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-18 03:16:07.736995 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-18 03:16:07.737005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:16:07.737014 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:16:07.737023 | orchestrator | 2026-03-18 03:16:07.737032 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-18 03:16:07.737041 | orchestrator | Wednesday 18 March 2026 03:16:06 +0000 (0:00:00.609) 0:00:46.881 ******* 2026-03-18 03:16:07.737060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-18 03:16:11.350636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-18 03:16:11.350766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-18 
03:16:11.350783 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:16:11.350809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-18 03:16:11.350820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-18 03:16:11.350830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:16:11.350839 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:16:11.350865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-18 03:16:11.350883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-18 03:16:11.350947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:16:11.350963 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:16:11.350973 | orchestrator | 2026-03-18 03:16:11.350984 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-18 03:16:11.350995 | orchestrator | Wednesday 18 March 2026 03:16:07 +0000 (0:00:00.871) 0:00:47.752 ******* 2026-03-18 03:16:11.351005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-18 03:16:11.351017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-18 03:16:11.351040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-18 03:16:21.207749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:16:21.207877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:16:21.207950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-18 03:16:21.207964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:16:21.207978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:16:21.208011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:16:21.208024 | orchestrator |
2026-03-18 03:16:21.208037 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-03-18 03:16:21.208050 | orchestrator | Wednesday 18 March 2026 03:16:11 +0000 (0:00:03.610) 0:00:51.363 *******
2026-03-18 03:16:21.208062 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:16:21.208074 | orchestrator | changed: [testbed-node-2]
2026-03-18 03:16:21.208085 | orchestrator | changed: [testbed-node-1]
2026-03-18 03:16:21.208095 | orchestrator |
2026-03-18 03:16:21.208123 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-03-18 03:16:21.208135 | orchestrator | Wednesday 18 March 2026 03:16:12 +0000 (0:00:01.528) 0:00:52.891 *******
2026-03-18 03:16:21.208146 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-18 03:16:21.208157 | orchestrator |
2026-03-18 03:16:21.208168 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-03-18 03:16:21.208178 | orchestrator | Wednesday 18 March 2026 03:16:13 +0000 (0:00:01.060) 0:00:53.952 *******
2026-03-18 03:16:21.208189 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:16:21.208200 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:16:21.208211 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:16:21.208221 | orchestrator |
2026-03-18 03:16:21.208232 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-03-18 03:16:21.208243 | orchestrator | Wednesday 18 March 2026 03:16:14 +0000 (0:00:00.661) 0:00:54.614 *******
2026-03-18 03:16:21.208362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-18 03:16:21.208386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-18 03:16:21.208408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-18 03:16:21.208429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-18 03:16:22.121152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-18 03:16:22.121273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-18 03:16:22.121290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:16:22.121303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:16:22.121333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:16:22.121344 | orchestrator |
2026-03-18 03:16:22.121356 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2026-03-18 03:16:22.121367 | orchestrator | Wednesday 18 March 2026 03:16:21 +0000 (0:00:06.607) 0:01:01.221 *******
2026-03-18 03:16:22.121394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-18 03:16:22.121406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-18 03:16:22.121423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:16:22.121434 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:16:22.121446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-18 03:16:22.121467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-18 03:16:22.121478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:16:22.121488 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:16:22.121507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-18 03:16:24.508563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-18 03:16:24.508670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:16:24.508711 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:16:24.508739 | orchestrator |
2026-03-18 03:16:24.508751 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2026-03-18 03:16:24.508763 | orchestrator | Wednesday 18 March 2026 03:16:22 +0000 (0:00:00.911) 0:01:02.133 *******
2026-03-18 03:16:24.508775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-18 03:16:24.508788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-18 03:16:24.508818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-18 03:16:24.508836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-18 03:16:24.508858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-18 03:16:24.508870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-18 03:16:24.508910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:16:24.508922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:16:24.508933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:16:24.508944 | orchestrator |
2026-03-18 03:16:24.508956 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-18 03:16:24.508975 | orchestrator | Wednesday 18 March 2026 03:16:24 +0000 (0:00:02.379) 0:01:04.513 *******
2026-03-18 03:17:13.358617 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:17:13.358735 | orchestrator | skipping: [testbed-node-1]
2026-03-18
03:17:13.358750 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:17:13.358763 | orchestrator |
2026-03-18 03:17:13.358776 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-03-18 03:17:13.358788 | orchestrator | Wednesday 18 March 2026 03:16:24 +0000 (0:00:00.324) 0:01:04.837 *******
2026-03-18 03:17:13.358799 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:17:13.358809 | orchestrator |
2026-03-18 03:17:13.358836 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-03-18 03:17:13.358901 | orchestrator | Wednesday 18 March 2026 03:16:26 +0000 (0:00:02.130) 0:01:06.968 *******
2026-03-18 03:17:13.358914 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:17:13.358925 | orchestrator |
2026-03-18 03:17:13.358935 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-03-18 03:17:13.358946 | orchestrator | Wednesday 18 March 2026 03:16:29 +0000 (0:00:02.285) 0:01:09.254 *******
2026-03-18 03:17:13.358965 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:17:13.358984 | orchestrator |
2026-03-18 03:17:13.359002 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-18 03:17:13.359019 | orchestrator | Wednesday 18 March 2026 03:16:41 +0000 (0:00:12.138) 0:01:21.392 *******
2026-03-18 03:17:13.359036 | orchestrator |
2026-03-18 03:17:13.359053 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-18 03:17:13.359070 | orchestrator | Wednesday 18 March 2026 03:16:41 +0000 (0:00:00.095) 0:01:21.488 *******
2026-03-18 03:17:13.359085 | orchestrator |
2026-03-18 03:17:13.359100 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-18 03:17:13.359116 | orchestrator | Wednesday 18 March 2026 03:16:41 +0000 (0:00:00.070) 0:01:21.558 *******
2026-03-18 03:17:13.359133 | orchestrator |
2026-03-18 03:17:13.359150 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-03-18 03:17:13.359169 | orchestrator | Wednesday 18 March 2026 03:16:41 +0000 (0:00:00.073) 0:01:21.632 *******
2026-03-18 03:17:13.359187 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:17:13.359207 | orchestrator | changed: [testbed-node-1]
2026-03-18 03:17:13.359227 | orchestrator | changed: [testbed-node-2]
2026-03-18 03:17:13.359246 | orchestrator |
2026-03-18 03:17:13.359267 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-03-18 03:17:13.359286 | orchestrator | Wednesday 18 March 2026 03:16:52 +0000 (0:00:11.051) 0:01:32.684 *******
2026-03-18 03:17:13.359303 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:17:13.359323 | orchestrator | changed: [testbed-node-2]
2026-03-18 03:17:13.359342 | orchestrator | changed: [testbed-node-1]
2026-03-18 03:17:13.359362 | orchestrator |
2026-03-18 03:17:13.359382 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-03-18 03:17:13.359401 | orchestrator | Wednesday 18 March 2026 03:17:02 +0000 (0:00:09.812) 0:01:42.496 *******
2026-03-18 03:17:13.359420 | orchestrator | changed: [testbed-node-1]
2026-03-18 03:17:13.359431 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:17:13.359442 | orchestrator | changed: [testbed-node-2]
2026-03-18 03:17:13.359453 | orchestrator |
2026-03-18 03:17:13.359463 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 03:17:13.359478 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-18 03:17:13.359498 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-18 03:17:13.359516 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-18 03:17:13.359534 | orchestrator |
2026-03-18 03:17:13.359552 | orchestrator |
2026-03-18 03:17:13.359568 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 03:17:13.359586 | orchestrator | Wednesday 18 March 2026 03:17:12 +0000 (0:00:10.468) 0:01:52.965 *******
2026-03-18 03:17:13.359605 | orchestrator | ===============================================================================
2026-03-18 03:17:13.359620 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.45s
2026-03-18 03:17:13.359636 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.14s
2026-03-18 03:17:13.359652 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.05s
2026-03-18 03:17:13.359685 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.47s
2026-03-18 03:17:13.359704 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.81s
2026-03-18 03:17:13.359721 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.61s
2026-03-18 03:17:13.359738 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.41s
2026-03-18 03:17:13.359757 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.19s
2026-03-18 03:17:13.359774 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.00s
2026-03-18 03:17:13.359793 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.61s
2026-03-18 03:17:13.359811 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.54s
2026-03-18 03:17:13.359829 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.43s
2026-03-18 03:17:13.359876 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.23s
2026-03-18 03:17:13.359897 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.38s
2026-03-18 03:17:13.359915 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.29s
2026-03-18 03:17:13.359962 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.13s
2026-03-18 03:17:13.359983 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.69s
2026-03-18 03:17:13.359995 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.53s
2026-03-18 03:17:13.360006 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.34s
2026-03-18 03:17:13.360017 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.06s
2026-03-18 03:17:15.856164 | orchestrator | 2026-03-18 03:17:15 | INFO  | Task 52ff7cad-0527-4955-8a36-2f805db878e1 (designate) was prepared for execution.
2026-03-18 03:17:15.856288 | orchestrator | 2026-03-18 03:17:15 | INFO  | It takes a moment until task 52ff7cad-0527-4955-8a36-2f805db878e1 (designate) has been started and output is visible here.
2026-03-18 03:17:47.633394 | orchestrator |
2026-03-18 03:17:47.633482 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-18 03:17:47.633492 | orchestrator |
2026-03-18 03:17:47.633498 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-18 03:17:47.633504 | orchestrator | Wednesday 18 March 2026 03:17:20 +0000 (0:00:00.312) 0:00:00.312 *******
2026-03-18 03:17:47.633510 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:17:47.633517 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:17:47.633522 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:17:47.633528 | orchestrator |
2026-03-18 03:17:47.633533 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-18 03:17:47.633539 | orchestrator | Wednesday 18 March 2026 03:17:20 +0000 (0:00:00.335) 0:00:00.647 *******
2026-03-18 03:17:47.633546 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-03-18 03:17:47.633551 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-03-18 03:17:47.633557 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-03-18 03:17:47.633562 | orchestrator |
2026-03-18 03:17:47.633568 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-03-18 03:17:47.633573 | orchestrator |
2026-03-18 03:17:47.633579 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-18 03:17:47.633584 | orchestrator | Wednesday 18 March 2026 03:17:21 +0000 (0:00:00.475) 0:00:01.122 *******
2026-03-18 03:17:47.633590 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 03:17:47.633596 | orchestrator |
2026-03-18 03:17:47.633602 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-03-18 03:17:47.633607 | orchestrator | Wednesday 18 March 2026 03:17:21 +0000 (0:00:00.598) 0:00:01.721 *******
2026-03-18 03:17:47.633629 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-03-18 03:17:47.633635 | orchestrator |
2026-03-18 03:17:47.633641 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-03-18 03:17:47.633646 | orchestrator | Wednesday 18 March 2026 03:17:25 +0000 (0:00:03.326) 0:00:05.047 *******
2026-03-18 03:17:47.633652 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-03-18 03:17:47.633658 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-03-18 03:17:47.633663 | orchestrator |
2026-03-18 03:17:47.633668 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-03-18 03:17:47.633674 | orchestrator | Wednesday 18 March 2026 03:17:31 +0000 (0:00:06.400) 0:00:11.448 *******
2026-03-18 03:17:47.633679 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-18 03:17:47.633685 | orchestrator |
2026-03-18 03:17:47.633690 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-03-18 03:17:47.633695 | orchestrator | Wednesday 18 March 2026 03:17:34 +0000 (0:00:03.216) 0:00:14.664 *******
2026-03-18 03:17:47.633701 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-18 03:17:47.633706 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-03-18 03:17:47.633711 | orchestrator |
2026-03-18 03:17:47.633717 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-03-18 03:17:47.633722 | orchestrator | Wednesday 18 March 2026 03:17:38 +0000 (0:00:03.950) 0:00:18.614 *******
2026-03-18 03:17:47.633727 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-18 03:17:47.633733 | orchestrator |
2026-03-18 03:17:47.633738 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-03-18 03:17:47.633744 | orchestrator | Wednesday 18 March 2026 03:17:41 +0000 (0:00:03.105) 0:00:21.720 *******
2026-03-18 03:17:47.633749 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-03-18 03:17:47.633755 | orchestrator |
2026-03-18 03:17:47.633760 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-03-18 03:17:47.633766 | orchestrator | Wednesday 18 March 2026 03:17:45 +0000 (0:00:03.714) 0:00:25.434 *******
2026-03-18 03:17:47.633775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-18 03:17:47.633808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-18 03:17:47.633883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-18 03:17:47.633891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:17:47.633899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:17:47.633905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:17:47.633914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:47.633926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:53.941420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:53.941512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:53.941524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:53.941531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:53.941537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:53.941561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:53.941582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:53.941607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-18 
03:17:53.941620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:53.941626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:53.941633 | orchestrator | 2026-03-18 03:17:53.941641 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-18 03:17:53.941648 | orchestrator | Wednesday 18 March 2026 03:17:48 +0000 (0:00:02.802) 0:00:28.237 ******* 2026-03-18 03:17:53.941655 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:17:53.941663 | orchestrator | 2026-03-18 03:17:53.941668 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-18 03:17:53.941674 | orchestrator | Wednesday 18 March 2026 03:17:48 +0000 (0:00:00.145) 0:00:28.383 ******* 2026-03-18 03:17:53.941680 | orchestrator | skipping: [testbed-node-0] 2026-03-18 
03:17:53.941687 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:17:53.941693 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:17:53.941700 | orchestrator | 2026-03-18 03:17:53.941707 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-18 03:17:53.941714 | orchestrator | Wednesday 18 March 2026 03:17:49 +0000 (0:00:00.559) 0:00:28.942 ******* 2026-03-18 03:17:53.941721 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:17:53.941728 | orchestrator | 2026-03-18 03:17:53.941734 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-18 03:17:53.941741 | orchestrator | Wednesday 18 March 2026 03:17:49 +0000 (0:00:00.573) 0:00:29.516 ******* 2026-03-18 03:17:53.941754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-18 03:17:53.941776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-18 03:17:55.774479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-18 03:17:55.774565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:17:55.774576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:17:55.774611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:17:55.774643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:55.774663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:55.774671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:55.774677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:55.774686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:55.774693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:55.774708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:55.774715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:55.774728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:56.711009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:56.711114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:56.711131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:17:56.711168 | orchestrator | 2026-03-18 03:17:56.711182 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-18 03:17:56.711194 | orchestrator | Wednesday 18 March 2026 03:17:55 +0000 (0:00:06.087) 0:00:35.603 ******* 2026-03-18 03:17:56.711222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-18 03:17:56.711236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 03:17:56.711266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 03:17:56.711279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 03:17:56.711291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 03:17:56.711303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-03-18 03:17:56.711321 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:17:56.711339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-18 03:17:56.711351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 03:17:56.711363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 03:17:56.711382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 03:17:57.547516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 03:17:57.547630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:17:57.547672 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:17:57.547714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-18 03:17:57.547730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 03:17:57.547743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 03:17:57.547755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 03:17:57.547792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 
03:17:57.547864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:17:57.547889 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:17:57.547901 | orchestrator | 2026-03-18 03:17:57.547914 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-18 03:17:57.547926 | orchestrator | Wednesday 18 March 2026 03:17:56 +0000 (0:00:01.055) 0:00:36.659 ******* 2026-03-18 03:17:57.547944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-18 03:17:57.547956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 03:17:57.547968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 03:17:57.547988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 03:17:57.934345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 03:17:57.934490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:17:57.934520 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:17:57.934551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-18 03:17:57.934565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 03:17:57.934577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 03:17:57.934589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 03:17:57.934619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 03:17:57.934647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:17:57.934659 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:17:57.934676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-18 03:17:57.934688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 03:17:57.934699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 03:17:57.934711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 03:17:57.934729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 03:18:02.390474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:18:02.390574 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:18:02.390586 | orchestrator | 2026-03-18 03:18:02.390593 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-18 
03:18:02.390600 | orchestrator | Wednesday 18 March 2026 03:17:57 +0000 (0:00:01.108) 0:00:37.767 ******* 2026-03-18 03:18:02.390620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-18 03:18:02.390628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-18 03:18:02.390635 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-18 03:18:02.390671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:18:02.390679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:18:02.390685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:18:02.390694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:02.390701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:02.390706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:02.390718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:02.390730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:14.218884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:14.218979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:14.219011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:14.219020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:14.219028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:14.219056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:14.219079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:14.219087 | orchestrator | 2026-03-18 03:18:14.219096 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-18 03:18:14.219103 | orchestrator | Wednesday 18 March 2026 03:18:04 +0000 (0:00:06.234) 0:00:44.002 ******* 2026-03-18 03:18:14.219111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-18 03:18:14.219123 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-18 03:18:14.219131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-18 03:18:14.219144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:18:14.219160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:18:22.826418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:18:22.826524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:22.826536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:22.826544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:22.826569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:22.826579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:22.826610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:22.826624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:22.826643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:22.826655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:22.826675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:22.826685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:22.826696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:22.826707 | orchestrator | 2026-03-18 03:18:22.826719 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-18 03:18:22.826731 | orchestrator | Wednesday 18 March 2026 03:18:18 +0000 (0:00:14.785) 0:00:58.787 ******* 2026-03-18 03:18:22.826750 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-18 03:18:27.206012 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-18 03:18:27.206178 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-18 03:18:27.206193 | orchestrator | 2026-03-18 03:18:27.206205 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-18 03:18:27.206216 | orchestrator | Wednesday 18 March 2026 03:18:22 +0000 (0:00:03.870) 0:01:02.658 ******* 2026-03-18 03:18:27.206227 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-18 03:18:27.206238 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-18 03:18:27.206249 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-18 03:18:27.206259 | orchestrator | 2026-03-18 03:18:27.206270 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-18 03:18:27.206280 | orchestrator | Wednesday 18 March 2026 03:18:25 +0000 (0:00:02.542) 0:01:05.201 ******* 2026-03-18 03:18:27.206311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-18 03:18:27.206353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-18 03:18:27.206366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-03-18 03:18:27.206397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:18:27.206410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 03:18:27.206427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-03-18 03:18:27.206449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 03:18:27.206463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:18:27.206475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:18:27.206487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 03:18:27.206507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 03:18:30.096054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2026-03-18 03:18:30.096193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 03:18:30.096206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 03:18:30.096214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 03:18:30.096221 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:30.096229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:30.096250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:30.096258 | orchestrator | 2026-03-18 03:18:30.096266 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-03-18 03:18:30.096274 | orchestrator | Wednesday 18 March 2026 03:18:28 +0000 (0:00:02.919) 0:01:08.120 ******* 2026-03-18 03:18:30.096290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-18 03:18:30.096299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-18 
03:18:30.096306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-18 03:18:30.096313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:18:30.096325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 03:18:31.158477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 03:18:31.158582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:18:31.158600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 03:18:31.158613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 03:18:31.158625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 03:18:31.158636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 03:18:31.158667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:18:31.158717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 03:18:31.158741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 03:18:31.158773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 03:18:31.158858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:31.158879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:18:31.158898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:18:31.158932 | orchestrator |
2026-03-18 03:18:31.158954 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-18 03:18:31.158987 | orchestrator | Wednesday 18 March 2026 03:18:31 +0000 (0:00:02.865) 0:01:10.985 *******
2026-03-18 03:18:32.238418 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:18:32.238537 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:18:32.238548 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:18:32.238555 | orchestrator |
2026-03-18 03:18:32.238563 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-03-18 03:18:32.238570 | orchestrator | Wednesday 18 March 2026 03:18:31 +0000 (0:00:00.312) 0:01:11.298 *******
2026-03-18 03:18:32.238590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-18 03:18:32.238624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 03:18:32.238632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 03:18:32.238638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 03:18:32.238645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 03:18:32.238685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:18:32.238691 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:18:32.238701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-18 03:18:32.238707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 03:18:32.238712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 03:18:32.238718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 03:18:32.238723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 03:18:32.238738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:18:35.752785 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:18:35.752919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-18 03:18:35.752932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 03:18:35.752940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 03:18:35.752947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 03:18:35.752955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 03:18:35.752977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:18:35.752984 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:18:35.752991 | orchestrator |
2026-03-18 03:18:35.753009 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-03-18 03:18:35.753017 | orchestrator | Wednesday 18 March 2026 03:18:32 +0000 (0:00:00.889) 0:01:12.188 *******
2026-03-18 03:18:35.753027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-18 03:18:35.753034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-18 03:18:35.753040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-18 03:18:35.753051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:18:35.753061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:18:37.553464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-18 03:18:37.553601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:37.553631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:37.553652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:37.553701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:37.553722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:37.553769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:37.553863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:37.553885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:37.553897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-18 03:18:37.553908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:18:37.553930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:18:37.553943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:18:37.553956 | orchestrator |
2026-03-18 03:18:37.553971 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-18 03:18:37.553985 | orchestrator | Wednesday 18 March 2026 03:18:37 +0000 (0:00:04.893) 0:01:17.082 *******
2026-03-18 03:18:37.553997 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:18:37.554124 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:20:06.579214 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:20:06.579331 | orchestrator |
2026-03-18 03:20:06.579349 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-03-18 03:20:06.579361 | orchestrator | Wednesday 18 March 2026 03:18:37 +0000 (0:00:00.303) 0:01:17.385 *******
2026-03-18 03:20:06.579373 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-03-18 03:20:06.579384 | orchestrator |
2026-03-18 03:20:06.579394 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-03-18 03:20:06.579419 | orchestrator | Wednesday 18 March 2026 03:18:39 +0000 (0:00:02.075) 0:01:19.461 *******
2026-03-18 03:20:06.579426 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-18 03:20:06.579433 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-03-18 03:20:06.579439 | orchestrator |
2026-03-18 03:20:06.579445 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-03-18 03:20:06.579451 | orchestrator | Wednesday 18 March 2026 03:18:41 +0000 (0:00:02.215) 0:01:21.676 *******
2026-03-18 03:20:06.579457 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:20:06.579463 | orchestrator |
2026-03-18 03:20:06.579469 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-18 03:20:06.579475 | orchestrator | Wednesday 18 March 2026 03:18:58 +0000 (0:00:16.425) 0:01:38.101 *******
2026-03-18 03:20:06.579480 | orchestrator |
2026-03-18 03:20:06.579486 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-18 03:20:06.579492 | orchestrator | Wednesday 18 March 2026 03:18:58 +0000 (0:00:00.081) 0:01:38.182 *******
2026-03-18 03:20:06.579498 | orchestrator |
2026-03-18 03:20:06.579504 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-18 03:20:06.579509 | orchestrator | Wednesday 18 March 2026 03:18:58 +0000 (0:00:00.085) 0:01:38.268 *******
2026-03-18 03:20:06.579515 | orchestrator |
2026-03-18 03:20:06.579521 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-03-18 03:20:06.579546 | orchestrator | Wednesday 18 March 2026 03:18:58 +0000 (0:00:00.078) 0:01:38.346 *******
2026-03-18 03:20:06.579556 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:20:06.579565 | orchestrator | changed: [testbed-node-2]
2026-03-18 03:20:06.579575 | orchestrator | changed: [testbed-node-1]
2026-03-18 03:20:06.579585 | orchestrator |
2026-03-18 03:20:06.579594 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-03-18 03:20:06.579603 | orchestrator | Wednesday 18 March 2026 03:19:11 +0000 (0:00:12.937) 0:01:51.283 *******
2026-03-18 03:20:06.579612 | orchestrator | changed: [testbed-node-1]
2026-03-18 03:20:06.579622 | orchestrator | changed: [testbed-node-2]
2026-03-18 03:20:06.579632 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:20:06.579641 | orchestrator |
2026-03-18 03:20:06.579651 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-03-18 03:20:06.579661 | orchestrator | Wednesday 18 March 2026 03:19:20 +0000 (0:00:08.903) 0:02:00.187 *******
2026-03-18 03:20:06.579671 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:20:06.579681 | orchestrator | changed: [testbed-node-2]
2026-03-18 03:20:06.579690 | orchestrator | changed: [testbed-node-1]
2026-03-18 03:20:06.579700 | orchestrator |
2026-03-18 03:20:06.579709 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-03-18 03:20:06.579720 | orchestrator | Wednesday 18 March 2026 03:19:30 +0000 (0:00:10.602) 0:02:10.789 *******
2026-03-18 03:20:06.579769 | orchestrator | changed: [testbed-node-2]
2026-03-18 03:20:06.579778 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:20:06.579784 | orchestrator | changed: [testbed-node-1]
2026-03-18 03:20:06.579790 | orchestrator |
2026-03-18 03:20:06.579795
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-18 03:20:06.579802 | orchestrator | Wednesday 18 March 2026 03:19:41 +0000 (0:00:10.736) 0:02:21.526 ******* 2026-03-18 03:20:06.579807 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:20:06.579813 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:20:06.579819 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:20:06.579824 | orchestrator | 2026-03-18 03:20:06.579830 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-18 03:20:06.579836 | orchestrator | Wednesday 18 March 2026 03:19:47 +0000 (0:00:06.199) 0:02:27.726 ******* 2026-03-18 03:20:06.579842 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:20:06.579848 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:20:06.579854 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:20:06.579864 | orchestrator | 2026-03-18 03:20:06.579874 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-18 03:20:06.579883 | orchestrator | Wednesday 18 March 2026 03:19:59 +0000 (0:00:11.148) 0:02:38.874 ******* 2026-03-18 03:20:06.579892 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:20:06.579901 | orchestrator | 2026-03-18 03:20:06.579910 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:20:06.579922 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-18 03:20:06.579934 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-18 03:20:06.579944 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-18 03:20:06.579954 | orchestrator | 2026-03-18 03:20:06.579963 | orchestrator | 2026-03-18 03:20:06.579973 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-18 03:20:06.579979 | orchestrator | Wednesday 18 March 2026 03:20:06 +0000 (0:00:07.088) 0:02:45.963 ******* 2026-03-18 03:20:06.579985 | orchestrator | =============================================================================== 2026-03-18 03:20:06.579990 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.43s 2026-03-18 03:20:06.580003 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.79s 2026-03-18 03:20:06.580025 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.94s 2026-03-18 03:20:06.580031 | orchestrator | designate : Restart designate-worker container ------------------------- 11.15s 2026-03-18 03:20:06.580037 | orchestrator | designate : Restart designate-producer container ----------------------- 10.74s 2026-03-18 03:20:06.580042 | orchestrator | designate : Restart designate-central container ------------------------ 10.60s 2026-03-18 03:20:06.580048 | orchestrator | designate : Restart designate-api container ----------------------------- 8.90s 2026-03-18 03:20:06.580063 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.09s 2026-03-18 03:20:06.580073 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.40s 2026-03-18 03:20:06.580083 | orchestrator | designate : Copying over config.json files for services ----------------- 6.23s 2026-03-18 03:20:06.580092 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.20s 2026-03-18 03:20:06.580102 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.09s 2026-03-18 03:20:06.580112 | orchestrator | designate : Check designate containers ---------------------------------- 4.89s 2026-03-18 03:20:06.580122 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 3.95s 2026-03-18 03:20:06.580131 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.87s 2026-03-18 03:20:06.580141 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.71s 2026-03-18 03:20:06.580150 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.33s 2026-03-18 03:20:06.580160 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.22s 2026-03-18 03:20:06.580169 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.11s 2026-03-18 03:20:06.580178 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.92s 2026-03-18 03:20:09.179625 | orchestrator | 2026-03-18 03:20:09 | INFO  | Task bf45ce78-4743-4e1f-8878-01f0450f6cca (octavia) was prepared for execution. 2026-03-18 03:20:09.179719 | orchestrator | 2026-03-18 03:20:09 | INFO  | It takes a moment until task bf45ce78-4743-4e1f-8878-01f0450f6cca (octavia) has been started and output is visible here. 
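The PLAY RECAP and TASKS RECAP blocks above use Ansible's fixed counter layout (`ok= changed= unreachable= failed= skipped= rescued= ignored=`). As a hypothetical post-processing aid for console logs like this one (not part of the job itself), a small parser for a recap host line could look like:

```python
import re

# Matches one PLAY RECAP host line as emitted by Ansible's default stdout
# callback, e.g. "testbed-node-0 : ok=29  changed=23  unreachable=0 ...".
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+unreachable=(?P<unreachable>\d+)\s+"
    r"failed=(?P<failed>\d+)\s+skipped=(?P<skipped>\d+)\s+rescued=(?P<rescued>\d+)\s+"
    r"ignored=(?P<ignored>\d+)"
)

def parse_recap(line: str) -> dict:
    """Return {'host': ..., 'ok': int, ...} for one PLAY RECAP line."""
    m = RECAP_RE.search(line)
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    d = m.groupdict()
    return {"host": d.pop("host"), **{k: int(v) for k, v in d.items()}}

# Counters taken from the testbed-node-0 recap line in the log above.
line = ("testbed-node-0 : ok=29  changed=23  unreachable=0 "
        "failed=0 skipped=7  rescued=0 ignored=0")
result = parse_recap(line)
# A run is considered healthy when no host reports failed or unreachable tasks.
assert result["failed"] == 0 and result["unreachable"] == 0
```

Such a helper would make it easy to flag failing hosts across the many per-role plays (designate above, octavia below) in one pass over the console output.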
2026-03-18 03:22:16.413358 | orchestrator | 2026-03-18 03:22:16.413480 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 03:22:16.413493 | orchestrator | 2026-03-18 03:22:16.413502 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 03:22:16.413546 | orchestrator | Wednesday 18 March 2026 03:20:13 +0000 (0:00:00.278) 0:00:00.278 ******* 2026-03-18 03:22:16.413555 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:22:16.413565 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:22:16.413573 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:22:16.413580 | orchestrator | 2026-03-18 03:22:16.413589 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 03:22:16.413596 | orchestrator | Wednesday 18 March 2026 03:20:14 +0000 (0:00:00.376) 0:00:00.654 ******* 2026-03-18 03:22:16.413604 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-18 03:22:16.413612 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-18 03:22:16.413619 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-18 03:22:16.413627 | orchestrator | 2026-03-18 03:22:16.413634 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-18 03:22:16.413641 | orchestrator | 2026-03-18 03:22:16.413691 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-18 03:22:16.413699 | orchestrator | Wednesday 18 March 2026 03:20:14 +0000 (0:00:00.524) 0:00:01.179 ******* 2026-03-18 03:22:16.413707 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:22:16.413734 | orchestrator | 2026-03-18 03:22:16.413742 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-03-18 03:22:16.413749 | orchestrator | Wednesday 18 March 2026 03:20:15 +0000 (0:00:00.607) 0:00:01.786 ******* 2026-03-18 03:22:16.413757 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-18 03:22:16.413764 | orchestrator | 2026-03-18 03:22:16.413771 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-18 03:22:16.413779 | orchestrator | Wednesday 18 March 2026 03:20:18 +0000 (0:00:03.354) 0:00:05.141 ******* 2026-03-18 03:22:16.413786 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-18 03:22:16.413794 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-18 03:22:16.413801 | orchestrator | 2026-03-18 03:22:16.413808 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-18 03:22:16.413817 | orchestrator | Wednesday 18 March 2026 03:20:25 +0000 (0:00:06.402) 0:00:11.544 ******* 2026-03-18 03:22:16.413829 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-18 03:22:16.413841 | orchestrator | 2026-03-18 03:22:16.413853 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-18 03:22:16.413865 | orchestrator | Wednesday 18 March 2026 03:20:28 +0000 (0:00:03.310) 0:00:14.854 ******* 2026-03-18 03:22:16.413877 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-18 03:22:16.413889 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-18 03:22:16.413901 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-18 03:22:16.413913 | orchestrator | 2026-03-18 03:22:16.413925 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-18 03:22:16.413938 | orchestrator | Wednesday 18 March 2026 03:20:36 +0000 
(0:00:08.259) 0:00:23.113 ******* 2026-03-18 03:22:16.413951 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-18 03:22:16.413963 | orchestrator | 2026-03-18 03:22:16.413976 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-18 03:22:16.413988 | orchestrator | Wednesday 18 March 2026 03:20:40 +0000 (0:00:03.234) 0:00:26.348 ******* 2026-03-18 03:22:16.413999 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-18 03:22:16.414013 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-18 03:22:16.414074 | orchestrator | 2026-03-18 03:22:16.414093 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-18 03:22:16.414101 | orchestrator | Wednesday 18 March 2026 03:20:47 +0000 (0:00:07.258) 0:00:33.607 ******* 2026-03-18 03:22:16.414131 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-18 03:22:16.414139 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-18 03:22:16.414147 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-18 03:22:16.414154 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-18 03:22:16.414161 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-18 03:22:16.414168 | orchestrator | 2026-03-18 03:22:16.414175 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-18 03:22:16.414183 | orchestrator | Wednesday 18 March 2026 03:21:02 +0000 (0:00:15.305) 0:00:48.912 ******* 2026-03-18 03:22:16.414190 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:22:16.414197 | orchestrator | 2026-03-18 03:22:16.414204 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-03-18 03:22:16.414211 | orchestrator | Wednesday 18 March 2026 03:21:03 +0000 (0:00:00.823) 0:00:49.735 ******* 2026-03-18 03:22:16.414219 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:22:16.414226 | orchestrator | 2026-03-18 03:22:16.414233 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-18 03:22:16.414248 | orchestrator | Wednesday 18 March 2026 03:21:08 +0000 (0:00:05.161) 0:00:54.897 ******* 2026-03-18 03:22:16.414256 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:22:16.414263 | orchestrator | 2026-03-18 03:22:16.414271 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-18 03:22:16.414293 | orchestrator | Wednesday 18 March 2026 03:21:13 +0000 (0:00:04.701) 0:00:59.598 ******* 2026-03-18 03:22:16.414301 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:22:16.414308 | orchestrator | 2026-03-18 03:22:16.414316 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-18 03:22:16.414323 | orchestrator | Wednesday 18 March 2026 03:21:16 +0000 (0:00:03.284) 0:01:02.883 ******* 2026-03-18 03:22:16.414330 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-18 03:22:16.414338 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-18 03:22:16.414345 | orchestrator | 2026-03-18 03:22:16.414352 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-18 03:22:16.414359 | orchestrator | Wednesday 18 March 2026 03:21:26 +0000 (0:00:09.649) 0:01:12.533 ******* 2026-03-18 03:22:16.414367 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-18 03:22:16.414375 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-18 03:22:16.414384 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-18 03:22:16.414393 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-18 03:22:16.414401 | orchestrator | 2026-03-18 03:22:16.414408 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-18 03:22:16.414415 | orchestrator | Wednesday 18 March 2026 03:21:42 +0000 (0:00:16.753) 0:01:29.287 ******* 2026-03-18 03:22:16.414423 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:22:16.414430 | orchestrator | 2026-03-18 03:22:16.414440 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-18 03:22:16.414448 | orchestrator | Wednesday 18 March 2026 03:21:47 +0000 (0:00:04.690) 0:01:33.977 ******* 2026-03-18 03:22:16.414455 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:22:16.414462 | orchestrator | 2026-03-18 03:22:16.414469 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-18 03:22:16.414477 | orchestrator | Wednesday 18 March 2026 03:21:53 +0000 (0:00:05.796) 0:01:39.773 ******* 2026-03-18 03:22:16.414484 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:22:16.414491 | orchestrator | 2026-03-18 03:22:16.414498 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-18 03:22:16.414505 | orchestrator | Wednesday 18 March 2026 03:21:53 +0000 (0:00:00.220) 0:01:39.993 ******* 2026-03-18 03:22:16.414512 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:22:16.414520 | orchestrator | 2026-03-18 03:22:16.414527 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-03-18 03:22:16.414534 | orchestrator | Wednesday 18 March 2026 03:21:58 +0000 (0:00:04.385) 0:01:44.378 ******* 2026-03-18 03:22:16.414541 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:22:16.414549 | orchestrator | 2026-03-18 03:22:16.414556 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-18 03:22:16.414564 | orchestrator | Wednesday 18 March 2026 03:21:59 +0000 (0:00:01.235) 0:01:45.614 ******* 2026-03-18 03:22:16.414571 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:22:16.414578 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:22:16.414585 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:22:16.414593 | orchestrator | 2026-03-18 03:22:16.414600 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-18 03:22:16.414612 | orchestrator | Wednesday 18 March 2026 03:22:04 +0000 (0:00:04.767) 0:01:50.382 ******* 2026-03-18 03:22:16.414620 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:22:16.414627 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:22:16.414634 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:22:16.414641 | orchestrator | 2026-03-18 03:22:16.414777 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-18 03:22:16.414797 | orchestrator | Wednesday 18 March 2026 03:22:08 +0000 (0:00:04.717) 0:01:55.099 ******* 2026-03-18 03:22:16.414805 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:22:16.414812 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:22:16.414819 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:22:16.414826 | orchestrator | 2026-03-18 03:22:16.414834 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-18 
03:22:16.414841 | orchestrator | Wednesday 18 March 2026 03:22:09 +0000 (0:00:01.047) 0:01:56.146 ******* 2026-03-18 03:22:16.414848 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:22:16.414855 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:22:16.414862 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:22:16.414869 | orchestrator | 2026-03-18 03:22:16.414877 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-18 03:22:16.414884 | orchestrator | Wednesday 18 March 2026 03:22:11 +0000 (0:00:01.748) 0:01:57.894 ******* 2026-03-18 03:22:16.414891 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:22:16.414898 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:22:16.414905 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:22:16.414912 | orchestrator | 2026-03-18 03:22:16.414919 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-18 03:22:16.414927 | orchestrator | Wednesday 18 March 2026 03:22:12 +0000 (0:00:01.263) 0:01:59.158 ******* 2026-03-18 03:22:16.414934 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:22:16.414941 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:22:16.414948 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:22:16.414955 | orchestrator | 2026-03-18 03:22:16.414962 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-18 03:22:16.414969 | orchestrator | Wednesday 18 March 2026 03:22:14 +0000 (0:00:01.234) 0:02:00.393 ******* 2026-03-18 03:22:16.414976 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:22:16.414984 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:22:16.414991 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:22:16.414998 | orchestrator | 2026-03-18 03:22:16.415015 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-18 03:22:42.360669 | orchestrator 
| Wednesday 18 March 2026 03:22:16 +0000 (0:00:02.289) 0:02:02.683 ******* 2026-03-18 03:22:42.360794 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:22:42.360833 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:22:42.360849 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:22:42.360865 | orchestrator | 2026-03-18 03:22:42.360883 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-18 03:22:42.360900 | orchestrator | Wednesday 18 March 2026 03:22:18 +0000 (0:00:01.674) 0:02:04.357 ******* 2026-03-18 03:22:42.360915 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:22:42.360933 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:22:42.360949 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:22:42.360965 | orchestrator | 2026-03-18 03:22:42.360982 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-18 03:22:42.360998 | orchestrator | Wednesday 18 March 2026 03:22:18 +0000 (0:00:00.718) 0:02:05.076 ******* 2026-03-18 03:22:42.361013 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:22:42.361031 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:22:42.361046 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:22:42.361061 | orchestrator | 2026-03-18 03:22:42.361076 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-18 03:22:42.361086 | orchestrator | Wednesday 18 March 2026 03:22:22 +0000 (0:00:03.232) 0:02:08.308 ******* 2026-03-18 03:22:42.361123 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:22:42.361134 | orchestrator | 2026-03-18 03:22:42.361144 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-18 03:22:42.361155 | orchestrator | Wednesday 18 March 2026 03:22:22 +0000 (0:00:00.719) 0:02:09.028 ******* 2026-03-18 
03:22:42.361167 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:22:42.361178 | orchestrator | 2026-03-18 03:22:42.361188 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-18 03:22:42.361200 | orchestrator | Wednesday 18 March 2026 03:22:26 +0000 (0:00:03.659) 0:02:12.687 ******* 2026-03-18 03:22:42.361211 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:22:42.361221 | orchestrator | 2026-03-18 03:22:42.361232 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-18 03:22:42.361243 | orchestrator | Wednesday 18 March 2026 03:22:29 +0000 (0:00:03.208) 0:02:15.896 ******* 2026-03-18 03:22:42.361254 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-18 03:22:42.361265 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-18 03:22:42.361276 | orchestrator | 2026-03-18 03:22:42.361287 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-18 03:22:42.361298 | orchestrator | Wednesday 18 March 2026 03:22:36 +0000 (0:00:06.633) 0:02:22.529 ******* 2026-03-18 03:22:42.361309 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:22:42.361321 | orchestrator | 2026-03-18 03:22:42.361331 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-18 03:22:42.361342 | orchestrator | Wednesday 18 March 2026 03:22:39 +0000 (0:00:03.464) 0:02:25.994 ******* 2026-03-18 03:22:42.361353 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:22:42.361465 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:22:42.361483 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:22:42.361499 | orchestrator | 2026-03-18 03:22:42.361515 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-18 03:22:42.361532 | orchestrator | Wednesday 18 March 2026 03:22:40 +0000 (0:00:00.522) 0:02:26.516 ******* 
2026-03-18 03:22:42.361573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 03:22:42.361625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 03:22:42.361687 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 03:22:42.361707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-18 03:22:42.361725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-18 03:22:42.361743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-18 03:22:42.361770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-18 03:22:42.361783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-18 03:22:42.361811 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-18 03:22:43.919778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-18 03:22:43.919891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-18 03:22:43.919911 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-18 03:22:43.919947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:22:43.919958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:22:43.919967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:22:43.919995 | orchestrator |
2026-03-18 03:22:43.920005 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-03-18 03:22:43.920015 | orchestrator | Wednesday 18 March 2026 03:22:42 +0000 (0:00:02.563) 0:02:29.079 *******
2026-03-18 03:22:43.920023 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:22:43.920033 | orchestrator |
2026-03-18 03:22:43.920042 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-03-18 03:22:43.920050 | orchestrator | Wednesday 18 March 2026 03:22:42 +0000 (0:00:00.149) 0:02:29.228 *******
2026-03-18 03:22:43.920058 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:22:43.920081 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:22:43.920090 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:22:43.920098 | orchestrator |
2026-03-18 03:22:43.920106 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-03-18 03:22:43.920114 | orchestrator | Wednesday 18 March 2026 03:22:43 +0000 (0:00:00.345) 0:02:29.573 *******
2026-03-18 03:22:43.920123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-18 03:22:43.920134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-18 03:22:43.920148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-18 03:22:43.920158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-18 03:22:43.920172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:22:43.920181 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:22:43.920198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-18 03:22:48.967425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-18 03:22:48.967533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-18 03:22:48.967550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-18 03:22:48.967579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:22:48.967615 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:22:48.967691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-18 03:22:48.967705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-18 03:22:48.967737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-18 03:22:48.967751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-18 03:22:48.967762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:22:48.967774 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:22:48.967786 | orchestrator |
2026-03-18 03:22:48.967798 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-18 03:22:48.967816 | orchestrator | Wednesday 18 March 2026 03:22:44 +0000 (0:00:00.745) 0:02:30.318 *******
2026-03-18 03:22:48.967837 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 03:22:48.967848 | orchestrator |
2026-03-18 03:22:48.967890 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-03-18 03:22:48.967902 | orchestrator | Wednesday 18 March 2026 03:22:44 +0000 (0:00:00.837) 0:02:31.156 *******
2026-03-18 03:22:48.967914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode':
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 03:22:48.967927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 03:22:48.967948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 03:22:50.539523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-18 03:22:50.539671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-18 03:22:50.539715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-18 03:22:50.539725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-18 03:22:50.539734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-18 03:22:50.539741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-18 03:22:50.539762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-18 03:22:50.539770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-18 03:22:50.539786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-18 03:22:50.539794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:22:50.539802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:22:50.539809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-18 03:22:50.539816 | orchestrator |
2026-03-18 03:22:50.539824 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-03-18 03:22:50.539833 | orchestrator | Wednesday 18 March 2026 03:22:49 +0000 (0:00:05.056) 0:02:36.213 *******
2026-03-18 03:22:50.539846 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-18 03:22:50.646892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-18 03:22:50.647031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-18 03:22:50.647048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-18 03:22:50.647061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:22:50.647073 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:22:50.647086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-18 03:22:50.647099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-18 03:22:50.647128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-18 03:22:50.647151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-18 03:22:50.647162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:22:50.647172 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:22:50.647183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-18 03:22:50.647193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-18 03:22:50.647204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-18 03:22:50.647222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-03-18 03:22:51.482834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:22:51.482923 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:22:51.482935 | orchestrator | 2026-03-18 03:22:51.482944 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-18 03:22:51.482952 | orchestrator | Wednesday 18 March 2026 03:22:50 +0000 (0:00:00.715) 0:02:36.928 ******* 2026-03-18 03:22:51.482961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-03-18 03:22:51.482971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-18 03:22:51.482981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-18 03:22:51.482989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-18 03:22:51.483033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:22:51.483042 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:22:51.483054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-18 03:22:51.483062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-18 03:22:51.483070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-18 03:22:51.483077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-18 03:22:51.483085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:22:51.483098 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:22:51.483131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-18 03:22:56.095330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-18 03:22:56.095463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-18 03:22:56.095491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-18 03:22:56.095523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-18 03:22:56.095541 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:22:56.095554 | orchestrator | 2026-03-18 03:22:56.095566 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-18 
03:22:56.095601 | orchestrator | Wednesday 18 March 2026 03:22:51 +0000 (0:00:01.347) 0:02:38.276 ******* 2026-03-18 03:22:56.095613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 03:22:56.095681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 03:22:56.095695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 03:22:56.095706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-18 03:22:56.095717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-18 03:22:56.095734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-18 03:22:56.095745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-18 03:22:56.095762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-18 03:23:12.569602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-18 03:23:12.569749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-18 03:23:12.569760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-18 03:23:12.569767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-18 03:23:12.569792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:23:12.569800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-03-18 03:23:12.569833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:23:12.569845 | orchestrator | 2026-03-18 03:23:12.569856 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-18 03:23:12.569869 | orchestrator | Wednesday 18 March 2026 03:22:57 +0000 (0:00:05.091) 0:02:43.368 ******* 2026-03-18 03:23:12.569880 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-18 03:23:12.569892 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-18 03:23:12.569902 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-18 03:23:12.569910 | orchestrator | 2026-03-18 03:23:12.569916 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-18 03:23:12.569922 | orchestrator | Wednesday 18 March 2026 03:22:58 +0000 (0:00:01.720) 0:02:45.088 ******* 2026-03-18 03:23:12.569930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 03:23:12.569944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 03:23:12.569951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 03:23:12.569963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-18 03:23:28.571656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-18 03:23:28.571784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-18 03:23:28.571811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-18 03:23:28.571861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-18 03:23:28.571884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-18 03:23:28.571906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-18 03:23:28.571956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-18 03:23:28.571978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-18 03:23:28.571998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:23:28.572033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:23:28.572053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:23:28.572073 | orchestrator | 2026-03-18 03:23:28.572094 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-18 03:23:28.572115 | orchestrator | Wednesday 18 March 2026 03:23:16 +0000 (0:00:17.478) 0:03:02.567 ******* 2026-03-18 03:23:28.572134 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:23:28.572156 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:23:28.572176 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:23:28.572196 | orchestrator | 2026-03-18 03:23:28.572218 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-18 03:23:28.572238 | orchestrator | Wednesday 18 March 2026 03:23:18 +0000 (0:00:01.791) 0:03:04.358 ******* 2026-03-18 03:23:28.572258 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-18 03:23:28.572279 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-18 03:23:28.572293 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-18 03:23:28.572303 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-18 03:23:28.572314 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-18 03:23:28.572325 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-18 03:23:28.572335 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-18 03:23:28.572346 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-18 03:23:28.572357 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-18 03:23:28.572367 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-18 03:23:28.572378 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-18 03:23:28.572388 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-18 03:23:28.572399 | orchestrator | 2026-03-18 03:23:28.572410 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-18 03:23:28.572421 | orchestrator | Wednesday 18 March 2026 03:23:23 +0000 (0:00:05.230) 0:03:09.589 ******* 2026-03-18 03:23:28.572432 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-18 03:23:28.572443 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-18 03:23:28.572469 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-18 03:23:37.179022 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-18 03:23:37.179148 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-18 03:23:37.179171 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-18 03:23:37.179217 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-18 03:23:37.179234 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-18 03:23:37.179248 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-18 03:23:37.179263 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-18 03:23:37.179278 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-18 03:23:37.179294 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-18 03:23:37.179312 | orchestrator | 2026-03-18 03:23:37.179329 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-18 03:23:37.179340 | orchestrator | Wednesday 18 March 2026 03:23:28 +0000 (0:00:05.254) 0:03:14.843 ******* 2026-03-18 03:23:37.179350 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-03-18 03:23:37.179360 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-18 03:23:37.179369 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-18 03:23:37.179379 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-18 03:23:37.179392 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-18 03:23:37.179412 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-18 03:23:37.179435 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-18 03:23:37.179450 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-18 03:23:37.179466 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-18 03:23:37.179481 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-18 03:23:37.179498 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-18 03:23:37.179514 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-18 03:23:37.179530 | orchestrator | 2026-03-18 03:23:37.179546 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-18 03:23:37.179557 | orchestrator | Wednesday 18 March 2026 03:23:33 +0000 (0:00:05.311) 0:03:20.155 ******* 2026-03-18 03:23:37.179572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 03:23:37.179589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 03:23:37.179689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 03:23:37.179705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-18 03:23:37.179719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-18 03:23:37.179730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-03-18 03:23:37.179743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-18 03:23:37.179755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-18 03:23:37.179766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-18 03:23:37.179796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-18 03:24:59.073613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-18 03:24:59.073713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-18 03:24:59.073726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:24:59.073745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:24:59.073753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-18 03:24:59.073779 | orchestrator | 2026-03-18 
03:24:59.073789 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-18 03:24:59.073798 | orchestrator | Wednesday 18 March 2026 03:23:37 +0000 (0:00:04.015) 0:03:24.171 ******* 2026-03-18 03:24:59.073806 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:24:59.073814 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:24:59.073821 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:24:59.073829 | orchestrator | 2026-03-18 03:24:59.073836 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-18 03:24:59.073843 | orchestrator | Wednesday 18 March 2026 03:23:38 +0000 (0:00:00.592) 0:03:24.764 ******* 2026-03-18 03:24:59.073850 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:24:59.073857 | orchestrator | 2026-03-18 03:24:59.073864 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-18 03:24:59.073884 | orchestrator | Wednesday 18 March 2026 03:23:40 +0000 (0:00:02.012) 0:03:26.776 ******* 2026-03-18 03:24:59.073891 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:24:59.073898 | orchestrator | 2026-03-18 03:24:59.073905 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-18 03:24:59.073912 | orchestrator | Wednesday 18 March 2026 03:23:42 +0000 (0:00:02.124) 0:03:28.901 ******* 2026-03-18 03:24:59.073919 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:24:59.073926 | orchestrator | 2026-03-18 03:24:59.073933 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-18 03:24:59.073941 | orchestrator | Wednesday 18 March 2026 03:23:44 +0000 (0:00:02.289) 0:03:31.190 ******* 2026-03-18 03:24:59.073962 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:24:59.073970 | orchestrator | 2026-03-18 03:24:59.073978 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-03-18 03:24:59.073985 | orchestrator | Wednesday 18 March 2026 03:23:47 +0000 (0:00:02.320) 0:03:33.511 ******* 2026-03-18 03:24:59.073992 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:24:59.073999 | orchestrator | 2026-03-18 03:24:59.074006 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-18 03:24:59.074067 | orchestrator | Wednesday 18 March 2026 03:24:09 +0000 (0:00:22.461) 0:03:55.972 ******* 2026-03-18 03:24:59.074076 | orchestrator | 2026-03-18 03:24:59.074083 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-18 03:24:59.074090 | orchestrator | Wednesday 18 March 2026 03:24:09 +0000 (0:00:00.087) 0:03:56.060 ******* 2026-03-18 03:24:59.074097 | orchestrator | 2026-03-18 03:24:59.074107 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-18 03:24:59.074115 | orchestrator | Wednesday 18 March 2026 03:24:09 +0000 (0:00:00.076) 0:03:56.136 ******* 2026-03-18 03:24:59.074123 | orchestrator | 2026-03-18 03:24:59.074131 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-18 03:24:59.074140 | orchestrator | Wednesday 18 March 2026 03:24:09 +0000 (0:00:00.069) 0:03:56.205 ******* 2026-03-18 03:24:59.074148 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:24:59.074157 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:24:59.074165 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:24:59.074173 | orchestrator | 2026-03-18 03:24:59.074181 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-18 03:24:59.074189 | orchestrator | Wednesday 18 March 2026 03:24:21 +0000 (0:00:11.579) 0:04:07.785 ******* 2026-03-18 03:24:59.074198 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:24:59.074206 | orchestrator | changed: 
[testbed-node-2] 2026-03-18 03:24:59.074214 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:24:59.074222 | orchestrator | 2026-03-18 03:24:59.074237 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-18 03:24:59.074246 | orchestrator | Wednesday 18 March 2026 03:24:32 +0000 (0:00:11.263) 0:04:19.049 ******* 2026-03-18 03:24:59.074254 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:24:59.074263 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:24:59.074271 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:24:59.074279 | orchestrator | 2026-03-18 03:24:59.074288 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-18 03:24:59.074296 | orchestrator | Wednesday 18 March 2026 03:24:38 +0000 (0:00:05.337) 0:04:24.386 ******* 2026-03-18 03:24:59.074305 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:24:59.074312 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:24:59.074319 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:24:59.074326 | orchestrator | 2026-03-18 03:24:59.074333 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-18 03:24:59.074341 | orchestrator | Wednesday 18 March 2026 03:24:48 +0000 (0:00:10.368) 0:04:34.755 ******* 2026-03-18 03:24:59.074348 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:24:59.074355 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:24:59.074363 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:24:59.074370 | orchestrator | 2026-03-18 03:24:59.074377 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:24:59.074385 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-18 03:24:59.074394 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-18 03:24:59.074402 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-18 03:24:59.074409 | orchestrator | 2026-03-18 03:24:59.074416 | orchestrator | 2026-03-18 03:24:59.074423 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:24:59.074430 | orchestrator | Wednesday 18 March 2026 03:24:59 +0000 (0:00:10.577) 0:04:45.333 ******* 2026-03-18 03:24:59.074438 | orchestrator | =============================================================================== 2026-03-18 03:24:59.074445 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.46s 2026-03-18 03:24:59.074452 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.48s 2026-03-18 03:24:59.074459 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.75s 2026-03-18 03:24:59.074466 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.31s 2026-03-18 03:24:59.074474 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.58s 2026-03-18 03:24:59.074481 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.26s 2026-03-18 03:24:59.074488 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.58s 2026-03-18 03:24:59.074496 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.37s 2026-03-18 03:24:59.074513 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.65s 2026-03-18 03:24:59.074524 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.26s 2026-03-18 03:24:59.074536 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.26s 2026-03-18 03:24:59.074563 
| orchestrator | octavia : Get security groups for octavia ------------------------------- 6.63s 2026-03-18 03:24:59.074576 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.40s 2026-03-18 03:24:59.074587 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.80s 2026-03-18 03:24:59.074605 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.34s 2026-03-18 03:24:59.450335 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.31s 2026-03-18 03:24:59.450423 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.25s 2026-03-18 03:24:59.450429 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.23s 2026-03-18 03:24:59.450433 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.16s 2026-03-18 03:24:59.450437 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.09s 2026-03-18 03:25:01.965276 | orchestrator | 2026-03-18 03:25:01 | INFO  | Task 476f31ba-63f5-42aa-8f4d-9ae472df81cd (ceilometer) was prepared for execution. 2026-03-18 03:25:01.965393 | orchestrator | 2026-03-18 03:25:01 | INFO  | It takes a moment until task 476f31ba-63f5-42aa-8f4d-9ae472df81cd (ceilometer) has been started and output is visible here. 
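The octavia play above finishes with a clean PLAY RECAP (`failed=0`, `unreachable=0` on all three testbed nodes) before the ceilometer task is scheduled. When scanning console logs like this one mechanically, a small parser over the recap lines is a common approach; the sketch below is illustrative only (not part of the job), and the regex simply mirrors the `host : ok=N changed=N unreachable=N failed=N ...` shape seen in the recap above:

```python
import re

# Matches Ansible PLAY RECAP host lines, e.g.
#   testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def failed_hosts(recap_lines):
    """Return hosts whose recap reports failed or unreachable tasks."""
    bad = []
    for line in recap_lines:
        m = RECAP_RE.match(line.strip())
        if m and (int(m.group("failed")) > 0 or int(m.group("unreachable")) > 0):
            bad.append(m.group("host"))
    return bad

recap = [
    "testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0",
    "testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0",
    "testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0",
]
print(failed_hosts(recap))  # → [] (all three nodes deployed cleanly)
```

A richer variant might also flag `rescued` or nonzero `ignored` counts, but for pass/fail gating of a periodic job, `failed` and `unreachable` are the fields that matter.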
2026-03-18 03:25:25.610330 | orchestrator | 2026-03-18 03:25:25.610421 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 03:25:25.610431 | orchestrator | 2026-03-18 03:25:25.610439 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 03:25:25.610447 | orchestrator | Wednesday 18 March 2026 03:25:06 +0000 (0:00:00.279) 0:00:00.279 ******* 2026-03-18 03:25:25.610455 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:25:25.610464 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:25:25.610471 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:25:25.610478 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:25:25.610486 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:25:25.610494 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:25:25.610501 | orchestrator | 2026-03-18 03:25:25.610508 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 03:25:25.610516 | orchestrator | Wednesday 18 March 2026 03:25:07 +0000 (0:00:00.686) 0:00:00.965 ******* 2026-03-18 03:25:25.610524 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-03-18 03:25:25.610557 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-03-18 03:25:25.610566 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-03-18 03:25:25.610573 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-03-18 03:25:25.610580 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-03-18 03:25:25.610587 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-03-18 03:25:25.610594 | orchestrator | 2026-03-18 03:25:25.610602 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-03-18 03:25:25.610609 | orchestrator | 2026-03-18 03:25:25.610616 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-03-18 03:25:25.610623 | orchestrator | Wednesday 18 March 2026 03:25:07 +0000 (0:00:00.573) 0:00:01.539 ******* 2026-03-18 03:25:25.610631 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 03:25:25.610640 | orchestrator | 2026-03-18 03:25:25.610647 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-03-18 03:25:25.610720 | orchestrator | Wednesday 18 March 2026 03:25:08 +0000 (0:00:01.141) 0:00:02.680 ******* 2026-03-18 03:25:25.610730 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:25:25.610737 | orchestrator | 2026-03-18 03:25:25.610745 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-03-18 03:25:25.610752 | orchestrator | Wednesday 18 March 2026 03:25:09 +0000 (0:00:00.124) 0:00:02.805 ******* 2026-03-18 03:25:25.610759 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:25:25.610767 | orchestrator | 2026-03-18 03:25:25.610774 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-03-18 03:25:25.610781 | orchestrator | Wednesday 18 March 2026 03:25:09 +0000 (0:00:00.125) 0:00:02.931 ******* 2026-03-18 03:25:25.610788 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-18 03:25:25.610796 | orchestrator | 2026-03-18 03:25:25.610803 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-03-18 03:25:25.610831 | orchestrator | Wednesday 18 March 2026 03:25:12 +0000 (0:00:03.559) 0:00:06.490 ******* 2026-03-18 03:25:25.610838 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-18 03:25:25.610846 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-03-18 03:25:25.610853 | orchestrator | 
2026-03-18 03:25:25.610860 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-03-18 03:25:25.610867 | orchestrator | Wednesday 18 March 2026 03:25:16 +0000 (0:00:03.958) 0:00:10.449 ******* 2026-03-18 03:25:25.610875 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-18 03:25:25.610883 | orchestrator | 2026-03-18 03:25:25.610891 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-03-18 03:25:25.610899 | orchestrator | Wednesday 18 March 2026 03:25:19 +0000 (0:00:03.188) 0:00:13.637 ******* 2026-03-18 03:25:25.610908 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-03-18 03:25:25.610915 | orchestrator | 2026-03-18 03:25:25.610923 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-03-18 03:25:25.610945 | orchestrator | Wednesday 18 March 2026 03:25:23 +0000 (0:00:04.079) 0:00:17.717 ******* 2026-03-18 03:25:25.610953 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:25:25.610961 | orchestrator | 2026-03-18 03:25:25.610969 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-03-18 03:25:25.610977 | orchestrator | Wednesday 18 March 2026 03:25:24 +0000 (0:00:00.152) 0:00:17.870 ******* 2026-03-18 03:25:25.610989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-18 03:25:25.611017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-18 03:25:25.611027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-18 03:25:25.611037 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-18 03:25:25.611054 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-18 03:25:25.611065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-18 03:25:25.611074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-18 03:25:25.611089 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-18 03:25:30.604941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-18 03:25:30.605034 | orchestrator | 2026-03-18 03:25:30.605045 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-03-18 03:25:30.605054 | orchestrator | Wednesday 18 March 2026 03:25:25 +0000 (0:00:01.475) 0:00:19.345 ******* 2026-03-18 03:25:30.605060 | orchestrator | ok: 
[testbed-node-2 -> localhost] 2026-03-18 03:25:30.605068 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-18 03:25:30.605090 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-18 03:25:30.605096 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-18 03:25:30.605102 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-18 03:25:30.605108 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-18 03:25:30.605114 | orchestrator | 2026-03-18 03:25:30.605121 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-03-18 03:25:30.605128 | orchestrator | Wednesday 18 March 2026 03:25:27 +0000 (0:00:01.729) 0:00:21.075 ******* 2026-03-18 03:25:30.605134 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:25:30.605141 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:25:30.605147 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:25:30.605154 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:25:30.605159 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:25:30.605166 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:25:30.605172 | orchestrator | 2026-03-18 03:25:30.605178 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-03-18 03:25:30.605184 | orchestrator | Wednesday 18 March 2026 03:25:27 +0000 (0:00:00.644) 0:00:21.720 ******* 2026-03-18 03:25:30.605190 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:25:30.605196 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:25:30.605202 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:25:30.605208 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:25:30.605215 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:25:30.605221 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:25:30.605227 | orchestrator | 2026-03-18 03:25:30.605234 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-03-18 03:25:30.605241 | orchestrator | Wednesday 18 March 2026 03:25:28 +0000 (0:00:00.810) 0:00:22.530 ******* 2026-03-18 03:25:30.605247 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:25:30.605253 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:25:30.605259 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:25:30.605265 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:25:30.605301 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:25:30.605308 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:25:30.605314 | orchestrator | 2026-03-18 03:25:30.605320 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-03-18 03:25:30.605326 | orchestrator | Wednesday 18 March 2026 03:25:29 +0000 (0:00:00.634) 0:00:23.165 ******* 2026-03-18 03:25:30.605337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-18 03:25:30.605345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 03:25:30.605352 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:25:30.605374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-18 03:25:30.605387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 03:25:30.605394 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:25:30.605400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-18 03:25:30.605407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 03:25:30.605417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-18 03:25:30.605425 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:25:30.605431 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:25:30.605437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-18 03:25:30.605448 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:25:30.605460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-18 03:25:35.585886 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:25:35.585988 | orchestrator | 2026-03-18 03:25:35.585999 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-03-18 03:25:35.586009 | orchestrator | Wednesday 18 March 2026 03:25:30 +0000 (0:00:01.179) 0:00:24.344 ******* 2026-03-18 03:25:35.586073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-18 03:25:35.586086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 03:25:35.586095 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:25:35.586103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-18 03:25:35.586127 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 03:25:35.586136 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:25:35.586143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-18 03:25:35.586171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-03-18 03:25:35.586179 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:25:35.586204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-18 03:25:35.586213 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:25:35.586221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-18 03:25:35.586228 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:25:35.586235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-18 03:25:35.586247 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:25:35.586254 | orchestrator | 2026-03-18 03:25:35.586261 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-03-18 03:25:35.586269 | orchestrator | Wednesday 18 March 2026 03:25:31 +0000 (0:00:00.920) 0:00:25.265 ******* 2026-03-18 03:25:35.586276 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-18 03:25:35.586288 | orchestrator | 2026-03-18 03:25:35.586296 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-03-18 03:25:35.586303 | orchestrator | Wednesday 18 March 2026 03:25:32 +0000 (0:00:00.723) 0:00:25.988 ******* 2026-03-18 03:25:35.586310 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:25:35.586320 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:25:35.586326 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:25:35.586334 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:25:35.586341 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:25:35.586351 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:25:35.586360 | orchestrator | 2026-03-18 03:25:35.586369 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-03-18 03:25:35.586377 | orchestrator | Wednesday 18 March 2026 03:25:33 +0000 (0:00:00.847) 
0:00:26.835 ******* 2026-03-18 03:25:35.586386 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:25:35.586393 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:25:35.586401 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:25:35.586410 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:25:35.586417 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:25:35.586424 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:25:35.586433 | orchestrator | 2026-03-18 03:25:35.586441 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-03-18 03:25:35.586449 | orchestrator | Wednesday 18 March 2026 03:25:34 +0000 (0:00:00.974) 0:00:27.810 ******* 2026-03-18 03:25:35.586458 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:25:35.586466 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:25:35.586475 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:25:35.586483 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:25:35.586491 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:25:35.586499 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:25:35.586508 | orchestrator | 2026-03-18 03:25:35.586516 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-03-18 03:25:35.586549 | orchestrator | Wednesday 18 March 2026 03:25:34 +0000 (0:00:00.865) 0:00:28.675 ******* 2026-03-18 03:25:35.586557 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:25:35.586563 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:25:35.586569 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:25:35.586576 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:25:35.586583 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:25:35.586591 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:25:35.586599 | orchestrator | 2026-03-18 03:25:40.779931 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-03-18 03:25:40.780015 | orchestrator | Wednesday 18 March 2026 03:25:35 +0000 (0:00:00.651) 0:00:29.326 ******* 2026-03-18 03:25:40.780024 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-18 03:25:40.780030 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-18 03:25:40.780046 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-18 03:25:40.780052 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-18 03:25:40.780064 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-18 03:25:40.780073 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-18 03:25:40.780082 | orchestrator | 2026-03-18 03:25:40.780091 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-03-18 03:25:40.780100 | orchestrator | Wednesday 18 March 2026 03:25:37 +0000 (0:00:01.552) 0:00:30.878 ******* 2026-03-18 03:25:40.780113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-18 03:25:40.780145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 03:25:40.780153 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:25:40.780171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-18 03:25:40.780176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 03:25:40.780182 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:25:40.780187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-18 03:25:40.780206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-18 03:25:40.780212 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:25:40.780218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:40.780228 | orchestrator | skipping: [testbed-node-3]
2026-03-18 03:25:40.780234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:40.780239 | orchestrator | skipping: [testbed-node-4]
2026-03-18 03:25:40.780247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:40.780253 | orchestrator | skipping: [testbed-node-5]
2026-03-18 03:25:40.780258 | orchestrator |
2026-03-18 03:25:40.780263 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] *************************
2026-03-18 03:25:40.780268 | orchestrator | Wednesday 18 March 2026 03:25:37 +0000 (0:00:00.846) 0:00:31.725 *******
2026-03-18 03:25:40.780273 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:25:40.780278 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:25:40.780283 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:25:40.780288 | orchestrator | skipping: [testbed-node-3]
2026-03-18 03:25:40.780293 | orchestrator | skipping: [testbed-node-4]
2026-03-18 03:25:40.780298 | orchestrator | skipping: [testbed-node-5]
2026-03-18 03:25:40.780303 | orchestrator |
2026-03-18 03:25:40.780309 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-03-18 03:25:40.780313 | orchestrator | Wednesday 18 March 2026 03:25:38 +0000 (0:00:00.844) 0:00:32.569 *******
2026-03-18 03:25:40.780319 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-18 03:25:40.780344 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-18 03:25:40.780349 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-18 03:25:40.780354 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-18 03:25:40.780359 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-18 03:25:40.780364 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-18 03:25:40.780369 | orchestrator |
2026-03-18 03:25:40.780374 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-03-18 03:25:40.780379 | orchestrator | Wednesday 18 March 2026 03:25:40 +0000 (0:00:01.450) 0:00:34.019 *******
2026-03-18 03:25:40.780390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-18 03:25:46.963915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-18 03:25:46.964022 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:25:46.964042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-18 03:25:46.964059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-18 03:25:46.964079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-18 03:25:46.964099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-18 03:25:46.964119 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:25:46.964138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:46.964187 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:25:46.964207 | orchestrator | skipping: [testbed-node-3]
2026-03-18 03:25:46.964250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:46.964270 | orchestrator | skipping: [testbed-node-4]
2026-03-18 03:25:46.964290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:46.964308 | orchestrator | skipping: [testbed-node-5]
2026-03-18 03:25:46.964326 | orchestrator |
2026-03-18 03:25:46.964345 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] ***************
2026-03-18 03:25:46.964365 | orchestrator | Wednesday 18 March 2026 03:25:41 +0000 (0:00:01.241) 0:00:35.260 *******
2026-03-18 03:25:46.964383 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:25:46.964401 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:25:46.964418 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:25:46.964434 | orchestrator | skipping: [testbed-node-3]
2026-03-18 03:25:46.964452 | orchestrator | skipping: [testbed-node-4]
2026-03-18 03:25:46.964470 | orchestrator | skipping: [testbed-node-5]
2026-03-18 03:25:46.964486 | orchestrator |
2026-03-18 03:25:46.964504 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] *********************
2026-03-18 03:25:46.964568 | orchestrator | Wednesday 18 March 2026 03:25:42 +0000 (0:00:00.885) 0:00:36.146 *******
2026-03-18 03:25:46.964588 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:25:46.964604 | orchestrator |
2026-03-18 03:25:46.964621 | orchestrator | TASK [ceilometer : Set ceilometer policy file] *********************************
2026-03-18 03:25:46.964638 | orchestrator | Wednesday 18 March 2026 03:25:42 +0000 (0:00:00.148) 0:00:36.294 *******
2026-03-18 03:25:46.964655 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:25:46.964671 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:25:46.964689 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:25:46.964706 | orchestrator | skipping: [testbed-node-3]
2026-03-18 03:25:46.964723 | orchestrator | skipping: [testbed-node-4]
2026-03-18 03:25:46.964740 | orchestrator | skipping: [testbed-node-5]
2026-03-18 03:25:46.964757 | orchestrator |
2026-03-18 03:25:46.964773 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-03-18 03:25:46.964791 | orchestrator | Wednesday 18 March 2026 03:25:43 +0000 (0:00:00.654) 0:00:36.949 *******
2026-03-18 03:25:46.964809 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 03:25:46.964827 | orchestrator |
2026-03-18 03:25:46.964845 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] *****
2026-03-18 03:25:46.964878 | orchestrator | Wednesday 18 March 2026 03:25:44 +0000 (0:00:01.434) 0:00:38.384 *******
2026-03-18 03:25:46.964896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-18 03:25:46.964928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-18 03:25:47.522592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-18 03:25:47.522703 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:47.522721 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:47.522734 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:47.522769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-18 03:25:47.522782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-18 03:25:47.522812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-18 03:25:47.522826 | orchestrator |
2026-03-18 03:25:47.522839 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] ***
2026-03-18 03:25:47.522851 | orchestrator | Wednesday 18 March 2026 03:25:46 +0000 (0:00:02.318) 0:00:40.702 *******
2026-03-18 03:25:47.522864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-18 03:25:47.522876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-18 03:25:47.522888 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:25:47.522900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-18 03:25:47.522920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-18 03:25:47.522931 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:25:47.522942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-18 03:25:47.522976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-18 03:25:49.502384 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:25:49.502488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:49.502507 | orchestrator | skipping: [testbed-node-3]
2026-03-18 03:25:49.502638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:49.502699 | orchestrator | skipping: [testbed-node-4]
2026-03-18 03:25:49.502721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:49.502741 | orchestrator | skipping: [testbed-node-5]
2026-03-18 03:25:49.502760 | orchestrator |
2026-03-18 03:25:49.502781 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] ***
2026-03-18 03:25:49.502803 | orchestrator | Wednesday 18 March 2026 03:25:47 +0000 (0:00:00.935) 0:00:41.638 *******
2026-03-18 03:25:49.502825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-18 03:25:49.502847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-18 03:25:49.502895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-18 03:25:49.502916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-18 03:25:49.502940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-18 03:25:49.502980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-18 03:25:49.503001 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:25:49.503020 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:25:49.503041 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:25:49.503063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:49.503082 | orchestrator | skipping: [testbed-node-3]
2026-03-18 03:25:49.503104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:49.503126 | orchestrator | skipping: [testbed-node-4]
2026-03-18 03:25:49.503163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:56.933690 | orchestrator | skipping: [testbed-node-5]
2026-03-18 03:25:56.933800 | orchestrator |
2026-03-18 03:25:56.933816 | orchestrator | TASK [ceilometer : Copying over config.json files for services] ****************
2026-03-18 03:25:56.933830 | orchestrator | Wednesday 18 March 2026 03:25:49 +0000 (0:00:01.601) 0:00:43.239 *******
2026-03-18 03:25:56.933846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-18 03:25:56.933890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-18 03:25:56.933903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-18 03:25:56.933916 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:56.933930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:56.933960 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-18 03:25:56.933982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-18 03:25:56.933996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-18 03:25:56.934008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-18 03:25:56.934094 | orchestrator |
2026-03-18 03:25:56.934107 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] *******************************
2026-03-18 03:25:56.934119 | orchestrator | Wednesday 18 March 2026 03:25:52 +0000 (0:00:02.658) 0:00:45.897 *******
2026-03-18 03:25:56.934131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification',
'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-18 03:25:56.934144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-18 03:25:56.934165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-18 03:26:06.649292 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-18 03:26:06.649422 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-18 03:26:06.649443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-18 03:26:06.649459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-18 03:26:06.649475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-18 03:26:06.649489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-18 03:26:06.649587 | orchestrator | 2026-03-18 03:26:06.649606 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-03-18 03:26:06.649643 | orchestrator | Wednesday 18 March 2026 03:25:56 +0000 (0:00:04.777) 0:00:50.675 ******* 2026-03-18 03:26:06.649659 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-18 03:26:06.649676 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-18 03:26:06.649691 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-18 03:26:06.649703 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-18 03:26:06.649711 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-18 03:26:06.649719 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-18 03:26:06.649727 | orchestrator | 2026-03-18 03:26:06.649736 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-03-18 03:26:06.649744 | orchestrator | Wednesday 18 March 2026 03:25:58 +0000 (0:00:01.585) 0:00:52.260 ******* 2026-03-18 03:26:06.649752 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:26:06.649760 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:26:06.649767 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:26:06.649775 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:26:06.649783 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:26:06.649791 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:26:06.649800 | orchestrator | 2026-03-18 03:26:06.649810 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-03-18 03:26:06.649820 | orchestrator | Wednesday 18 March 2026 03:25:59 +0000 (0:00:00.646) 0:00:52.907 ******* 2026-03-18 03:26:06.649829 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:26:06.649838 | orchestrator | 
skipping: [testbed-node-4] 2026-03-18 03:26:06.649848 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:26:06.649857 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:26:06.649866 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:26:06.649875 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:26:06.649885 | orchestrator | 2026-03-18 03:26:06.649894 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-03-18 03:26:06.649903 | orchestrator | Wednesday 18 March 2026 03:26:00 +0000 (0:00:01.808) 0:00:54.715 ******* 2026-03-18 03:26:06.649911 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:26:06.649920 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:26:06.649930 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:26:06.649939 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:26:06.649948 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:26:06.649957 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:26:06.649966 | orchestrator | 2026-03-18 03:26:06.649980 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-03-18 03:26:06.649994 | orchestrator | Wednesday 18 March 2026 03:26:02 +0000 (0:00:01.490) 0:00:56.205 ******* 2026-03-18 03:26:06.650007 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-18 03:26:06.650084 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-18 03:26:06.650095 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-18 03:26:06.650104 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-18 03:26:06.650114 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-18 03:26:06.650123 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-18 03:26:06.650131 | orchestrator | 2026-03-18 03:26:06.650140 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-03-18 03:26:06.650153 | orchestrator | Wednesday 18 
March 2026 03:26:04 +0000 (0:00:01.685) 0:00:57.890 ******* 2026-03-18 03:26:06.650168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-18 03:26:06.650197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-18 03:26:06.650211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-18 03:26:06.650229 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-18 03:26:07.564570 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-18 03:26:07.564681 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-18 03:26:07.564740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-18 03:26:07.564764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-18 03:26:07.564786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 
'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-18 03:26:07.564804 | orchestrator | 2026-03-18 03:26:07.564818 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-03-18 03:26:07.564831 | orchestrator | Wednesday 18 March 2026 03:26:06 +0000 (0:00:02.498) 0:01:00.389 ******* 2026-03-18 03:26:07.564843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-18 03:26:07.564875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 03:26:07.564889 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-18 03:26:07.564908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 03:26:07.564919 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:26:07.564932 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:26:07.564943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-18 03:26:07.564955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 03:26:07.564966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-18 03:26:07.564977 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:26:07.564989 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:26:07.565007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-18 03:26:11.147034 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:26:11.147145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-18 03:26:11.147189 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:26:11.147203 | orchestrator | 2026-03-18 03:26:11.147216 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-03-18 03:26:11.147243 | orchestrator | Wednesday 18 March 2026 03:26:07 +0000 (0:00:00.919) 0:01:01.308 ******* 2026-03-18 03:26:11.147254 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:26:11.147265 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:26:11.147276 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:26:11.147286 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:26:11.147297 | orchestrator | skipping: [testbed-node-4] 2026-03-18 
03:26:11.147307 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:26:11.147318 | orchestrator | 2026-03-18 03:26:11.147329 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-03-18 03:26:11.147340 | orchestrator | Wednesday 18 March 2026 03:26:08 +0000 (0:00:00.842) 0:01:02.151 ******* 2026-03-18 03:26:11.147352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-18 03:26:11.147366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 03:26:11.147379 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:26:11.147391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-18 03:26:11.147403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 03:26:11.147446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-18 03:26:11.147468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 03:26:11.147487 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:26:11.147575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-18 03:26:11.147601 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:26:11.147620 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:26:11.147640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-18 03:26:11.147661 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:26:11.147682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-18 03:26:11.147702 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:26:11.147723 | orchestrator | 2026-03-18 03:26:11.147744 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-03-18 03:26:11.147775 | orchestrator | Wednesday 18 March 2026 03:26:09 +0000 (0:00:00.965) 0:01:03.117 ******* 2026-03-18 03:26:11.147808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-18 03:26:41.606266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-18 03:26:41.606351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-18 03:26:41.606359 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-18 03:26:41.606367 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-18 03:26:41.606372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-18 03:26:41.606397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-18 03:26:41.606414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-18 03:26:41.606420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-18 03:26:41.606425 | orchestrator | 2026-03-18 03:26:41.606431 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-03-18 03:26:41.606438 | orchestrator | Wednesday 18 March 2026 03:26:11 +0000 (0:00:01.771) 0:01:04.888 ******* 2026-03-18 03:26:41.606443 | orchestrator | 
skipping: [testbed-node-0] 2026-03-18 03:26:41.606449 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:26:41.606454 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:26:41.606458 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:26:41.606463 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:26:41.606468 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:26:41.606472 | orchestrator | 2026-03-18 03:26:41.606477 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-03-18 03:26:41.606482 | orchestrator | Wednesday 18 March 2026 03:26:11 +0000 (0:00:00.660) 0:01:05.549 ******* 2026-03-18 03:26:41.606514 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:26:41.606519 | orchestrator | 2026-03-18 03:26:41.606524 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-18 03:26:41.606529 | orchestrator | Wednesday 18 March 2026 03:26:16 +0000 (0:00:04.586) 0:01:10.135 ******* 2026-03-18 03:26:41.606534 | orchestrator | 2026-03-18 03:26:41.606538 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-18 03:26:41.606543 | orchestrator | Wednesday 18 March 2026 03:26:16 +0000 (0:00:00.112) 0:01:10.248 ******* 2026-03-18 03:26:41.606548 | orchestrator | 2026-03-18 03:26:41.606553 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-18 03:26:41.606557 | orchestrator | Wednesday 18 March 2026 03:26:16 +0000 (0:00:00.083) 0:01:10.331 ******* 2026-03-18 03:26:41.606562 | orchestrator | 2026-03-18 03:26:41.606567 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-18 03:26:41.606576 | orchestrator | Wednesday 18 March 2026 03:26:16 +0000 (0:00:00.298) 0:01:10.629 ******* 2026-03-18 03:26:41.606581 | orchestrator | 2026-03-18 03:26:41.606586 | orchestrator | TASK [ceilometer : Flush 
handlers] ********************************************* 2026-03-18 03:26:41.606591 | orchestrator | Wednesday 18 March 2026 03:26:16 +0000 (0:00:00.073) 0:01:10.703 ******* 2026-03-18 03:26:41.606596 | orchestrator | 2026-03-18 03:26:41.606600 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-18 03:26:41.606605 | orchestrator | Wednesday 18 March 2026 03:26:17 +0000 (0:00:00.074) 0:01:10.778 ******* 2026-03-18 03:26:41.606610 | orchestrator | 2026-03-18 03:26:41.606614 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-03-18 03:26:41.606619 | orchestrator | Wednesday 18 March 2026 03:26:17 +0000 (0:00:00.077) 0:01:10.855 ******* 2026-03-18 03:26:41.606624 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:26:41.606629 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:26:41.606633 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:26:41.606638 | orchestrator | 2026-03-18 03:26:41.606643 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-03-18 03:26:41.606648 | orchestrator | Wednesday 18 March 2026 03:26:24 +0000 (0:00:07.617) 0:01:18.473 ******* 2026-03-18 03:26:41.606652 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:26:41.606657 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:26:41.606662 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:26:41.606666 | orchestrator | 2026-03-18 03:26:41.606671 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-03-18 03:26:41.606676 | orchestrator | Wednesday 18 March 2026 03:26:34 +0000 (0:00:09.792) 0:01:28.265 ******* 2026-03-18 03:26:41.606681 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:26:41.606685 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:26:41.606690 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:26:41.606695 | orchestrator | 
2026-03-18 03:26:41.606700 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:26:41.606705 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-18 03:26:41.606712 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-18 03:26:41.606721 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-18 03:26:42.115897 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-18 03:26:42.116007 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-18 03:26:42.116022 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-18 03:26:42.116034 | orchestrator | 2026-03-18 03:26:42.116046 | orchestrator | 2026-03-18 03:26:42.116058 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:26:42.116071 | orchestrator | Wednesday 18 March 2026 03:26:41 +0000 (0:00:07.073) 0:01:35.339 ******* 2026-03-18 03:26:42.116082 | orchestrator | =============================================================================== 2026-03-18 03:26:42.116093 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.79s 2026-03-18 03:26:42.116104 | orchestrator | ceilometer : Restart ceilometer-notification container ------------------ 7.62s 2026-03-18 03:26:42.116114 | orchestrator | ceilometer : Restart ceilometer-compute container ----------------------- 7.07s 2026-03-18 03:26:42.116125 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 4.78s 2026-03-18 03:26:42.116162 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 
4.59s 2026-03-18 03:26:42.116174 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 4.08s 2026-03-18 03:26:42.116184 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.96s 2026-03-18 03:26:42.116195 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.56s 2026-03-18 03:26:42.116205 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.19s 2026-03-18 03:26:42.116216 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.66s 2026-03-18 03:26:42.116227 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.50s 2026-03-18 03:26:42.116237 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.32s 2026-03-18 03:26:42.116248 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.81s 2026-03-18 03:26:42.116259 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.77s 2026-03-18 03:26:42.116270 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.73s 2026-03-18 03:26:42.116280 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.69s 2026-03-18 03:26:42.116291 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.60s 2026-03-18 03:26:42.116302 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.59s 2026-03-18 03:26:42.116318 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.55s 2026-03-18 03:26:42.116336 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.49s 2026-03-18 03:26:44.654085 | orchestrator | 2026-03-18 03:26:44 | INFO  | Task 991fa0d5-3b62-4641-94b7-0c342c7db1f1 (aodh) was 
prepared for execution. 2026-03-18 03:26:44.654180 | orchestrator | 2026-03-18 03:26:44 | INFO  | It takes a moment until task 991fa0d5-3b62-4641-94b7-0c342c7db1f1 (aodh) has been started and output is visible here. 2026-03-18 03:27:16.785641 | orchestrator | 2026-03-18 03:27:16.785726 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 03:27:16.785735 | orchestrator | 2026-03-18 03:27:16.785741 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 03:27:16.785746 | orchestrator | Wednesday 18 March 2026 03:26:49 +0000 (0:00:00.300) 0:00:00.300 ******* 2026-03-18 03:27:16.785752 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:27:16.785758 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:27:16.785764 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:27:16.785769 | orchestrator | 2026-03-18 03:27:16.785774 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 03:27:16.785779 | orchestrator | Wednesday 18 March 2026 03:26:49 +0000 (0:00:00.353) 0:00:00.653 ******* 2026-03-18 03:27:16.785784 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-03-18 03:27:16.785790 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-03-18 03:27:16.785795 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-03-18 03:27:16.785800 | orchestrator | 2026-03-18 03:27:16.785805 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-03-18 03:27:16.785810 | orchestrator | 2026-03-18 03:27:16.785815 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-03-18 03:27:16.785820 | orchestrator | Wednesday 18 March 2026 03:26:50 +0000 (0:00:00.509) 0:00:01.163 ******* 2026-03-18 03:27:16.785826 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-18 03:27:16.785831 | orchestrator | 2026-03-18 03:27:16.785836 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-03-18 03:27:16.785841 | orchestrator | Wednesday 18 March 2026 03:26:50 +0000 (0:00:00.587) 0:00:01.751 ******* 2026-03-18 03:27:16.785847 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-03-18 03:27:16.785869 | orchestrator | 2026-03-18 03:27:16.785874 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-03-18 03:27:16.785879 | orchestrator | Wednesday 18 March 2026 03:26:54 +0000 (0:00:03.451) 0:00:05.203 ******* 2026-03-18 03:27:16.785885 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-03-18 03:27:16.785890 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-03-18 03:27:16.785895 | orchestrator | 2026-03-18 03:27:16.785900 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-03-18 03:27:16.785905 | orchestrator | Wednesday 18 March 2026 03:27:00 +0000 (0:00:06.247) 0:00:11.450 ******* 2026-03-18 03:27:16.785910 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-18 03:27:16.785916 | orchestrator | 2026-03-18 03:27:16.785921 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-03-18 03:27:16.785926 | orchestrator | Wednesday 18 March 2026 03:27:03 +0000 (0:00:03.438) 0:00:14.889 ******* 2026-03-18 03:27:16.785931 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-18 03:27:16.785936 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-03-18 03:27:16.785941 | orchestrator | 2026-03-18 03:27:16.785946 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 
2026-03-18 03:27:16.785951 | orchestrator | Wednesday 18 March 2026 03:27:07 +0000 (0:00:03.905) 0:00:18.795 ******* 2026-03-18 03:27:16.785956 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-18 03:27:16.785961 | orchestrator | 2026-03-18 03:27:16.785966 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-03-18 03:27:16.785971 | orchestrator | Wednesday 18 March 2026 03:27:10 +0000 (0:00:03.165) 0:00:21.960 ******* 2026-03-18 03:27:16.785976 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-03-18 03:27:16.785981 | orchestrator | 2026-03-18 03:27:16.785986 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-03-18 03:27:16.785991 | orchestrator | Wednesday 18 March 2026 03:27:14 +0000 (0:00:03.775) 0:00:25.736 ******* 2026-03-18 03:27:16.785999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 03:27:16.786060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 03:27:16.786068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 03:27:16.786079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-18 03:27:16.786086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-18 03:27:16.786095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-18 03:27:16.786103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:16.786126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:18.299793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:18.299930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-18 
03:27:18.299951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:18.299962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:18.299971 | orchestrator | 2026-03-18 03:27:18.299983 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-03-18 03:27:18.299995 | orchestrator | Wednesday 18 March 2026 03:27:16 +0000 (0:00:02.098) 0:00:27.835 ******* 2026-03-18 03:27:18.300004 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:27:18.300015 | orchestrator | 2026-03-18 03:27:18.300024 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-03-18 03:27:18.300032 | orchestrator | Wednesday 18 March 2026 03:27:16 +0000 (0:00:00.136) 0:00:27.972 ******* 2026-03-18 03:27:18.300041 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:27:18.300050 | orchestrator | skipping: 
[testbed-node-1] 2026-03-18 03:27:18.300060 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:27:18.300069 | orchestrator | 2026-03-18 03:27:18.300078 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-03-18 03:27:18.300087 | orchestrator | Wednesday 18 March 2026 03:27:17 +0000 (0:00:00.579) 0:00:28.551 ******* 2026-03-18 03:27:18.300097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-18 03:27:18.300136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 03:27:18.300146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 03:27:18.300156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 03:27:18.300165 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:27:18.300175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-18 03:27:18.300185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 03:27:18.300194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 03:27:18.300223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-03-18 03:27:23.454282 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:27:23.454395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-18 03:27:23.454415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 03:27:23.454430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 03:27:23.454442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 03:27:23.454453 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:27:23.454518 | orchestrator | 2026-03-18 03:27:23.454539 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-03-18 03:27:23.454558 | orchestrator | Wednesday 18 March 2026 03:27:18 +0000 (0:00:00.803) 0:00:29.355 ******* 2026-03-18 03:27:23.454577 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:27:23.454596 | orchestrator | 2026-03-18 03:27:23.454616 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-03-18 03:27:23.454680 | orchestrator | Wednesday 18 March 2026 03:27:19 +0000 (0:00:00.809) 0:00:30.164 ******* 2026-03-18 03:27:23.454702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 03:27:23.454738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 03:27:23.454751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 03:27:23.454762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-18 03:27:23.454774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-18 03:27:23.454794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-18 03:27:23.454806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:23.454828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:24.166208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:24.166291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:24.166300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:24.166308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:24.166334 | orchestrator | 2026-03-18 03:27:24.166346 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-03-18 03:27:24.166359 | orchestrator | Wednesday 18 March 2026 03:27:23 +0000 (0:00:04.340) 0:00:34.505 ******* 2026-03-18 03:27:24.166374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-18 03:27:24.166387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 03:27:24.166417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 
'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 03:27:24.166430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 03:27:24.166440 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:27:24.166455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-18 03:27:24.166530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 03:27:24.166543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 03:27:24.166555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  
2026-03-18 03:27:24.166566 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:27:24.166587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-18 03:27:25.320195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 03:27:25.320355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 03:27:25.320389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 03:27:25.320399 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:27:25.320410 | orchestrator | 2026-03-18 03:27:25.320419 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-03-18 03:27:25.320428 | orchestrator | Wednesday 18 March 2026 03:27:24 +0000 (0:00:00.716) 0:00:35.221 ******* 2026-03-18 03:27:25.320437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-18 03:27:25.320447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 03:27:25.320456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 03:27:25.320611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-03-18 03:27:25.320628 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:27:25.320642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-18 03:27:25.320664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 03:27:25.320678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 03:27:25.320691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 03:27:25.320703 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:27:25.320724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-18 03:27:29.400173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 03:27:29.400267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 03:27:29.400274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 03:27:29.400280 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:27:29.400286 | orchestrator | 2026-03-18 03:27:29.400292 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 
2026-03-18 03:27:29.400298 | orchestrator | Wednesday 18 March 2026 03:27:25 +0000 (0:00:01.155) 0:00:36.377 ******* 2026-03-18 03:27:29.400303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 03:27:29.400309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 03:27:29.400325 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 03:27:29.400334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-18 03:27:29.400339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-18 03:27:29.400344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-18 03:27:29.400349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:29.400353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:29.400358 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:29.400367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:38.555078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:38.555150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:38.555157 | orchestrator | 2026-03-18 03:27:38.555163 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-03-18 03:27:38.555169 | orchestrator | Wednesday 18 March 2026 03:27:29 +0000 (0:00:04.077) 0:00:40.455 ******* 2026-03-18 03:27:38.555174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 03:27:38.555180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 03:27:38.555184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 03:27:38.555213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-18 03:27:38.555217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-18 03:27:38.555223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-18 03:27:38.555229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:38.555235 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:38.555242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:38.555256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:38.555285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:43.678327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:43.678417 | orchestrator | 2026-03-18 03:27:43.678429 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-03-18 03:27:43.678437 | orchestrator | Wednesday 18 March 2026 03:27:38 +0000 (0:00:09.153) 0:00:49.608 ******* 2026-03-18 03:27:43.678444 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:27:43.678497 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:27:43.678504 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:27:43.678510 | orchestrator | 2026-03-18 03:27:43.678517 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-03-18 03:27:43.678523 | orchestrator | Wednesday 18 March 2026 03:27:40 +0000 (0:00:01.834) 0:00:51.442 ******* 2026-03-18 03:27:43.678531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 03:27:43.678540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 03:27:43.678566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-18 03:27:43.678587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-18 03:27:43.678595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-18 03:27:43.678602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-18 03:27:43.678608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:43.678615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:43.678627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:43.678634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-18 03:27:43.678645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-18 03:28:44.795401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-18 03:28:44.795552 | orchestrator | 2026-03-18 03:28:44.795569 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-03-18 03:28:44.795581 | orchestrator | Wednesday 18 March 2026 03:27:43 +0000 (0:00:03.291) 0:00:54.734 ******* 2026-03-18 03:28:44.795592 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:28:44.795603 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:28:44.795613 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:28:44.795623 | orchestrator | 2026-03-18 03:28:44.795633 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-03-18 03:28:44.795643 | orchestrator | Wednesday 18 March 2026 03:27:44 +0000 (0:00:00.337) 0:00:55.072 ******* 2026-03-18 03:28:44.795653 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:28:44.795663 | orchestrator | 2026-03-18 03:28:44.795672 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-03-18 03:28:44.795682 | orchestrator | Wednesday 18 March 2026 03:27:46 +0000 (0:00:02.168) 0:00:57.240 ******* 2026-03-18 03:28:44.795692 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:28:44.795701 | orchestrator | 2026-03-18 03:28:44.795724 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-03-18 03:28:44.795744 | orchestrator | Wednesday 18 March 2026 03:27:48 +0000 (0:00:02.211) 0:00:59.452 ******* 2026-03-18 03:28:44.795754 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:28:44.795785 | orchestrator | 2026-03-18 03:28:44.795796 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-03-18 03:28:44.795805 | orchestrator | Wednesday 
18 March 2026 03:28:01 +0000 (0:00:13.531) 0:01:12.984 ******* 2026-03-18 03:28:44.795815 | orchestrator | 2026-03-18 03:28:44.795824 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-03-18 03:28:44.795834 | orchestrator | Wednesday 18 March 2026 03:28:02 +0000 (0:00:00.085) 0:01:13.070 ******* 2026-03-18 03:28:44.795844 | orchestrator | 2026-03-18 03:28:44.795853 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-03-18 03:28:44.795862 | orchestrator | Wednesday 18 March 2026 03:28:02 +0000 (0:00:00.075) 0:01:13.145 ******* 2026-03-18 03:28:44.795872 | orchestrator | 2026-03-18 03:28:44.795881 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-03-18 03:28:44.795891 | orchestrator | Wednesday 18 March 2026 03:28:02 +0000 (0:00:00.288) 0:01:13.434 ******* 2026-03-18 03:28:44.795900 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:28:44.795910 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:28:44.795920 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:28:44.795929 | orchestrator | 2026-03-18 03:28:44.795940 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-03-18 03:28:44.795951 | orchestrator | Wednesday 18 March 2026 03:28:13 +0000 (0:00:10.844) 0:01:24.279 ******* 2026-03-18 03:28:44.795962 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:28:44.795973 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:28:44.795984 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:28:44.795995 | orchestrator | 2026-03-18 03:28:44.796007 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-03-18 03:28:44.796018 | orchestrator | Wednesday 18 March 2026 03:28:23 +0000 (0:00:10.410) 0:01:34.690 ******* 2026-03-18 03:28:44.796028 | orchestrator | changed: [testbed-node-0] 2026-03-18 
03:28:44.796039 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:28:44.796050 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:28:44.796061 | orchestrator | 2026-03-18 03:28:44.796072 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-03-18 03:28:44.796082 | orchestrator | Wednesday 18 March 2026 03:28:34 +0000 (0:00:10.491) 0:01:45.181 ******* 2026-03-18 03:28:44.796093 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:28:44.796103 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:28:44.796114 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:28:44.796125 | orchestrator | 2026-03-18 03:28:44.796136 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:28:44.796148 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-18 03:28:44.796160 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-18 03:28:44.796172 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-18 03:28:44.796183 | orchestrator | 2026-03-18 03:28:44.796193 | orchestrator | 2026-03-18 03:28:44.796204 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:28:44.796216 | orchestrator | Wednesday 18 March 2026 03:28:44 +0000 (0:00:10.290) 0:01:55.472 ******* 2026-03-18 03:28:44.796226 | orchestrator | =============================================================================== 2026-03-18 03:28:44.796237 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.53s 2026-03-18 03:28:44.796249 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 10.84s 2026-03-18 03:28:44.796275 | orchestrator | aodh : Restart aodh-listener container 
--------------------------------- 10.49s 2026-03-18 03:28:44.796287 | orchestrator | aodh : Restart aodh-evaluator container -------------------------------- 10.41s 2026-03-18 03:28:44.796304 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 10.29s 2026-03-18 03:28:44.796314 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 9.15s 2026-03-18 03:28:44.796323 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.25s 2026-03-18 03:28:44.796333 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.34s 2026-03-18 03:28:44.796342 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.08s 2026-03-18 03:28:44.796352 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.91s 2026-03-18 03:28:44.796361 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.78s 2026-03-18 03:28:44.796370 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.45s 2026-03-18 03:28:44.796380 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.44s 2026-03-18 03:28:44.796389 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.29s 2026-03-18 03:28:44.796399 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.17s 2026-03-18 03:28:44.796408 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.21s 2026-03-18 03:28:44.796471 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.17s 2026-03-18 03:28:44.796482 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.10s 2026-03-18 03:28:44.796492 | orchestrator | aodh : Copying over wsgi-aodh files for services 
------------------------ 1.83s
2026-03-18 03:28:44.796501 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.15s
2026-03-18 03:28:47.351931 | orchestrator | 2026-03-18 03:28:47 | INFO  | Task a1263524-d5dd-468b-a049-3ab2b75b0a30 (kolla-ceph-rgw) was prepared for execution.
2026-03-18 03:28:47.352024 | orchestrator | 2026-03-18 03:28:47 | INFO  | It takes a moment until task a1263524-d5dd-468b-a049-3ab2b75b0a30 (kolla-ceph-rgw) has been started and output is visible here.
2026-03-18 03:29:24.692033 | orchestrator |
2026-03-18 03:29:24.692116 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-18 03:29:24.692123 | orchestrator |
2026-03-18 03:29:24.692128 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-18 03:29:24.692132 | orchestrator | Wednesday 18 March 2026 03:28:51 +0000 (0:00:00.291) 0:00:00.291 *******
2026-03-18 03:29:24.692137 | orchestrator | ok: [testbed-manager]
2026-03-18 03:29:24.692142 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:29:24.692146 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:29:24.692150 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:29:24.692154 | orchestrator | ok: [testbed-node-3]
2026-03-18 03:29:24.692157 | orchestrator | ok: [testbed-node-4]
2026-03-18 03:29:24.692161 | orchestrator | ok: [testbed-node-5]
2026-03-18 03:29:24.692165 | orchestrator |
2026-03-18 03:29:24.692169 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-18 03:29:24.692172 | orchestrator | Wednesday 18 March 2026 03:28:52 +0000 (0:00:00.894) 0:00:01.185 *******
2026-03-18 03:29:24.692177 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-18 03:29:24.692192 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-18 03:29:24.692196 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-18 03:29:24.692200 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-18 03:29:24.692204 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-18 03:29:24.692208 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-18 03:29:24.692211 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-18 03:29:24.692215 | orchestrator |
2026-03-18 03:29:24.692219 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-18 03:29:24.692223 | orchestrator |
2026-03-18 03:29:24.692226 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-18 03:29:24.692244 | orchestrator | Wednesday 18 March 2026 03:28:53 +0000 (0:00:00.753) 0:00:01.938 *******
2026-03-18 03:29:24.692248 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 03:29:24.692254 | orchestrator |
2026-03-18 03:29:24.692258 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-03-18 03:29:24.692261 | orchestrator | Wednesday 18 March 2026 03:28:55 +0000 (0:00:01.640) 0:00:03.578 *******
2026-03-18 03:29:24.692265 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-03-18 03:29:24.692269 | orchestrator |
2026-03-18 03:29:24.692274 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-03-18 03:29:24.692277 | orchestrator | Wednesday 18 March 2026 03:28:58 +0000 (0:00:03.840) 0:00:07.419 *******
2026-03-18 03:29:24.692282 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-18 03:29:24.692287 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-18 03:29:24.692291 | orchestrator |
2026-03-18 03:29:24.692295 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-18 03:29:24.692299 | orchestrator | Wednesday 18 March 2026 03:29:05 +0000 (0:00:06.489) 0:00:13.908 *******
2026-03-18 03:29:24.692302 | orchestrator | ok: [testbed-manager] => (item=service)
2026-03-18 03:29:24.692306 | orchestrator |
2026-03-18 03:29:24.692310 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-18 03:29:24.692314 | orchestrator | Wednesday 18 March 2026 03:29:08 +0000 (0:00:03.190) 0:00:17.099 *******
2026-03-18 03:29:24.692317 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-18 03:29:24.692322 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-03-18 03:29:24.692325 | orchestrator |
2026-03-18 03:29:24.692329 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-18 03:29:24.692333 | orchestrator | Wednesday 18 March 2026 03:29:12 +0000 (0:00:04.001) 0:00:21.101 *******
2026-03-18 03:29:24.692337 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-03-18 03:29:24.692341 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-03-18 03:29:24.692344 | orchestrator |
2026-03-18 03:29:24.692348 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-03-18 03:29:24.692352 | orchestrator | Wednesday 18 March 2026 03:29:19 +0000 (0:00:06.393) 0:00:27.494 *******
2026-03-18 03:29:24.692355 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-03-18 03:29:24.692359 | orchestrator |
2026-03-18 03:29:24.692363 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18
03:29:24.692367 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 03:29:24.692372 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 03:29:24.692375 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 03:29:24.692379 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 03:29:24.692383 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 03:29:24.692429 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 03:29:24.692434 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 03:29:24.692442 | orchestrator |
2026-03-18 03:29:24.692446 | orchestrator |
2026-03-18 03:29:24.692449 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 03:29:24.692453 | orchestrator | Wednesday 18 March 2026 03:29:24 +0000 (0:00:05.061) 0:00:32.556 *******
2026-03-18 03:29:24.692457 | orchestrator | ===============================================================================
2026-03-18 03:29:24.692461 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.49s
2026-03-18 03:29:24.692464 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.39s
2026-03-18 03:29:24.692468 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.06s
2026-03-18 03:29:24.692475 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.00s
2026-03-18 03:29:24.692479 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.84s
2026-03-18 03:29:24.692483 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.19s
2026-03-18 03:29:24.692487 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.64s
2026-03-18 03:29:24.692491 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.89s
2026-03-18 03:29:24.692495 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s
2026-03-18 03:29:27.303517 | orchestrator | 2026-03-18 03:29:27 | INFO  | Task 02cde443-b707-422b-891c-6336bf940413 (gnocchi) was prepared for execution.
2026-03-18 03:29:27.303607 | orchestrator | 2026-03-18 03:29:27 | INFO  | It takes a moment until task 02cde443-b707-422b-891c-6336bf940413 (gnocchi) has been started and output is visible here.
2026-03-18 03:29:32.988846 | orchestrator |
2026-03-18 03:29:32.988991 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-18 03:29:32.989022 | orchestrator |
2026-03-18 03:29:32.989040 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-18 03:29:32.989052 | orchestrator | Wednesday 18 March 2026 03:29:31 +0000 (0:00:00.271) 0:00:00.271 *******
2026-03-18 03:29:32.989064 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:29:32.989077 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:29:32.989088 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:29:32.989098 | orchestrator |
2026-03-18 03:29:32.989109 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-18 03:29:32.989120 | orchestrator | Wednesday 18 March 2026 03:29:32 +0000 (0:00:00.386) 0:00:00.657 *******
2026-03-18 03:29:32.989131 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-03-18 03:29:32.989143 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-03-18 03:29:32.989154 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-03-18 03:29:32.989165 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-03-18 03:29:32.989176 | orchestrator |
2026-03-18 03:29:32.989186 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-03-18 03:29:32.989197 | orchestrator | skipping: no hosts matched
2026-03-18 03:29:32.989209 | orchestrator |
2026-03-18 03:29:32.989220 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 03:29:32.989232 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 03:29:32.989244 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 03:29:32.989254 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 03:29:32.989265 | orchestrator |
2026-03-18 03:29:32.989276 | orchestrator |
2026-03-18 03:29:32.989286 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 03:29:32.989297 | orchestrator | Wednesday 18 March 2026 03:29:32 +0000 (0:00:00.390) 0:00:01.047 *******
2026-03-18 03:29:32.989340 | orchestrator | ===============================================================================
2026-03-18 03:29:32.989352 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s
2026-03-18 03:29:32.989364 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s
2026-03-18 03:29:35.499800 | orchestrator | 2026-03-18 03:29:35 | INFO  | Task 2026e503-1b70-422c-a3b9-a2b14d90e69a (manila) was prepared for execution.
2026-03-18 03:29:35.499926 | orchestrator | 2026-03-18 03:29:35 | INFO  | It takes a moment until task 2026e503-1b70-422c-a3b9-a2b14d90e69a (manila) has been started and output is visible here.
2026-03-18 03:30:17.678936 | orchestrator |
2026-03-18 03:30:17.679045 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-18 03:30:17.679062 | orchestrator |
2026-03-18 03:30:17.679075 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-18 03:30:17.679086 | orchestrator | Wednesday 18 March 2026 03:29:39 +0000 (0:00:00.313) 0:00:00.313 *******
2026-03-18 03:30:17.679098 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:30:17.679111 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:30:17.679122 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:30:17.679133 | orchestrator |
2026-03-18 03:30:17.679144 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-18 03:30:17.679155 | orchestrator | Wednesday 18 March 2026 03:29:40 +0000 (0:00:00.343) 0:00:00.656 *******
2026-03-18 03:30:17.679166 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-03-18 03:30:17.679177 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-03-18 03:30:17.679188 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-03-18 03:30:17.679199 | orchestrator |
2026-03-18 03:30:17.679210 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-03-18 03:30:17.679221 | orchestrator |
2026-03-18 03:30:17.679232 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-18 03:30:17.679243 | orchestrator | Wednesday 18 March 2026 03:29:40 +0000 (0:00:00.472) 0:00:01.129 *******
2026-03-18 03:30:17.679254 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 03:30:17.679266 | orchestrator |
2026-03-18 03:30:17.679276 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-18 03:30:17.679287 | orchestrator | Wednesday 18 March 2026 03:29:41 +0000 (0:00:00.588) 0:00:01.718 *******
2026-03-18 03:30:17.679315 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:30:17.679327 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:30:17.679338 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:30:17.679349 | orchestrator |
2026-03-18 03:30:17.679360 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-03-18 03:30:17.679371 | orchestrator | Wednesday 18 March 2026 03:29:41 +0000 (0:00:00.486) 0:00:02.205 *******
2026-03-18 03:30:17.679443 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-03-18 03:30:17.679456 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-03-18 03:30:17.679469 | orchestrator |
2026-03-18 03:30:17.679481 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-03-18 03:30:17.679494 | orchestrator | Wednesday 18 March 2026 03:29:48 +0000 (0:00:06.483) 0:00:08.688 *******
2026-03-18 03:30:17.679507 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-03-18 03:30:17.679519 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-03-18 03:30:17.679532 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-03-18 03:30:17.679545 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-03-18 03:30:17.679578 | orchestrator |
2026-03-18 03:30:17.679591 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-03-18 03:30:17.679603 | orchestrator | Wednesday 18 March 2026 03:30:01 +0000 (0:00:12.916) 0:00:21.605 *******
2026-03-18 03:30:17.679616 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-18 03:30:17.679628 | orchestrator |
2026-03-18 03:30:17.679641 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-03-18 03:30:17.679653 | orchestrator | Wednesday 18 March 2026 03:30:04 +0000 (0:00:03.219) 0:00:24.824 *******
2026-03-18 03:30:17.679666 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-18 03:30:17.679678 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-03-18 03:30:17.679690 | orchestrator |
2026-03-18 03:30:17.679702 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-03-18 03:30:17.679714 | orchestrator | Wednesday 18 March 2026 03:30:08 +0000 (0:00:03.838) 0:00:28.663 *******
2026-03-18 03:30:17.679727 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-18 03:30:17.679739 | orchestrator |
2026-03-18 03:30:17.679751 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-03-18 03:30:17.679763 | orchestrator | Wednesday 18 March 2026 03:30:11 +0000 (0:00:03.181) 0:00:31.844 *******
2026-03-18 03:30:17.679776 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-03-18 03:30:17.679788 | orchestrator |
2026-03-18 03:30:17.679800 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-03-18 03:30:17.679811 | orchestrator | Wednesday 18 March 2026 03:30:15 +0000 (0:00:03.914) 0:00:35.758 *******
2026-03-18 03:30:17.679845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image':
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-18 03:30:17.679861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-18 03:30:17.679878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-18 03:30:17.679899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:30:17.679912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:30:17.679923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 
'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:30:17.679944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-18 03:30:28.734476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-18 03:30:28.734572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-18 03:30:28.734599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-18 03:30:28.734607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-18 03:30:28.734614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 
'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-18 03:30:28.734621 | orchestrator | 2026-03-18 03:30:28.734629 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-03-18 03:30:28.734637 | orchestrator | Wednesday 18 March 2026 03:30:17 +0000 (0:00:02.438) 0:00:38.197 ******* 2026-03-18 03:30:28.734644 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:30:28.734651 | orchestrator | 2026-03-18 03:30:28.734657 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-03-18 03:30:28.734664 | orchestrator | Wednesday 18 March 2026 03:30:18 +0000 (0:00:00.692) 0:00:38.889 ******* 2026-03-18 03:30:28.734670 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:30:28.734677 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:30:28.734684 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:30:28.734690 | orchestrator | 2026-03-18 03:30:28.734696 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-03-18 03:30:28.734702 | orchestrator | Wednesday 18 March 2026 03:30:19 +0000 (0:00:00.989) 0:00:39.879 ******* 2026-03-18 03:30:28.734710 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-18 03:30:28.734729 | orchestrator | 
changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-18 03:30:28.734737 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-18 03:30:28.734743 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-18 03:30:28.734750 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-18 03:30:28.734761 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-18 03:30:28.734768 | orchestrator | 2026-03-18 03:30:28.734778 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-03-18 03:30:28.734784 | orchestrator | Wednesday 18 March 2026 03:30:21 +0000 (0:00:01.982) 0:00:41.861 ******* 2026-03-18 03:30:28.734791 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-18 03:30:28.734797 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-18 03:30:28.734803 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': 
['CEPHFS']}) 2026-03-18 03:30:28.734809 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-18 03:30:28.734816 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-18 03:30:28.734822 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-18 03:30:28.734828 | orchestrator | 2026-03-18 03:30:28.734835 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-03-18 03:30:28.734841 | orchestrator | Wednesday 18 March 2026 03:30:22 +0000 (0:00:01.240) 0:00:43.102 ******* 2026-03-18 03:30:28.734848 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-03-18 03:30:28.734855 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-03-18 03:30:28.734861 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-03-18 03:30:28.734867 | orchestrator | 2026-03-18 03:30:28.734873 | orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-03-18 03:30:28.734880 | orchestrator | Wednesday 18 March 2026 03:30:23 +0000 (0:00:00.697) 0:00:43.800 ******* 2026-03-18 03:30:28.734886 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:30:28.734892 | orchestrator | 2026-03-18 03:30:28.734908 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-03-18 03:30:28.734915 | orchestrator | Wednesday 18 March 2026 03:30:23 +0000 (0:00:00.143) 0:00:43.943 ******* 2026-03-18 03:30:28.734921 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:30:28.734934 | orchestrator | skipping: 
[testbed-node-1] 2026-03-18 03:30:28.734940 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:30:28.734946 | orchestrator | 2026-03-18 03:30:28.734952 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-03-18 03:30:28.734958 | orchestrator | Wednesday 18 March 2026 03:30:24 +0000 (0:00:00.564) 0:00:44.508 ******* 2026-03-18 03:30:28.734965 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:30:28.734971 | orchestrator | 2026-03-18 03:30:28.734977 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-03-18 03:30:28.734983 | orchestrator | Wednesday 18 March 2026 03:30:24 +0000 (0:00:00.668) 0:00:45.176 ******* 2026-03-18 03:30:28.734995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-18 03:30:29.669000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-18 03:30:29.669133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-18 03:30:29.669161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:30:29.669182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:30:29.669203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:30:29.669276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-18 03:30:29.669293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-18 03:30:29.669312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-18 03:30:29.669325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-18 03:30:29.669337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-18 03:30:29.669348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-18 03:30:29.669368 | orchestrator |
2026-03-18 03:30:29.669408 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] ***
2026-03-18 03:30:29.669432 | orchestrator | Wednesday 18 March 2026 03:30:28 +0000 (0:00:04.075) 0:00:49.252 *******
2026-03-18 03:30:29.669453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-18 03:30:30.355867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:30:30.355959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-18 03:30:30.355973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-18 03:30:30.355983 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:30:30.355995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-18 03:30:30.356026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:30:30.356036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-18 03:30:30.356066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-18 03:30:30.356076 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:30:30.356086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-18 03:30:30.356095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:30:30.356104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-18 03:30:30.356119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-18 03:30:30.356129 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:30:30.356138 | orchestrator |
2026-03-18 03:30:30.356147 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ******
2026-03-18 03:30:30.356157 | orchestrator | Wednesday 18 March 2026 03:30:29 +0000 (0:00:00.937) 0:00:50.190 *******
2026-03-18 03:30:30.356174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-18 03:30:35.079655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:30:35.079751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-18 03:30:35.079761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-18 03:30:35.079845 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:30:35.079854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-18 03:30:35.079861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:30:35.079867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-18 03:30:35.079893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-18 03:30:35.079900 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:30:35.079905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-18 03:30:35.079912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:30:35.079924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-18 03:30:35.079930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-18 03:30:35.079936 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:30:35.079941 | orchestrator |
2026-03-18 03:30:35.079948 | orchestrator | TASK [manila : Copying over config.json files for services] ********************
2026-03-18 03:30:35.079956 | orchestrator | Wednesday 18 March 2026 03:30:30 +0000 (0:00:00.987) 0:00:51.177 *******
2026-03-18 03:30:35.079971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-18 03:30:42.169516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-18 03:30:42.169635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-18 03:30:42.169691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:30:42.169713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:30:42.169730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:30:42.169785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-18 03:30:42.169802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-18 03:30:42.169817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-18 03:30:42.169843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-18 03:30:42.169860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-18 03:30:42.169876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-18 03:30:42.169893 | orchestrator |
2026-03-18 03:30:42.169912 | orchestrator | TASK [manila : Copying over manila.conf] ***************************************
2026-03-18 03:30:42.169930 | orchestrator | Wednesday 18 March 2026 03:30:35 +0000 (0:00:04.653) 0:00:55.830 *******
2026-03-18 03:30:42.169965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-18 03:30:46.788257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-18 03:30:46.788458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-18 03:30:46.788491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:30:46.788514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-18 03:30:46.788536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 03:30:46.788605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value':
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 03:30:46.788630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:30:46.788666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 03:30:46.788686 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-18 03:30:46.788706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-18 03:30:46.788727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-18 03:30:46.788749 | orchestrator | 2026-03-18 03:30:46.788773 | orchestrator | TASK [manila : 
Copying over manila-share.conf] ********************************* 2026-03-18 03:30:46.788797 | orchestrator | Wednesday 18 March 2026 03:30:42 +0000 (0:00:06.860) 0:01:02.691 ******* 2026-03-18 03:30:46.788818 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-03-18 03:30:46.788838 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-03-18 03:30:46.788859 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-03-18 03:30:46.788878 | orchestrator | 2026-03-18 03:30:46.788902 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-03-18 03:30:46.788934 | orchestrator | Wednesday 18 March 2026 03:30:46 +0000 (0:00:03.901) 0:01:06.593 ******* 2026-03-18 03:30:46.788972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-18 03:30:50.156138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 03:30:50.156259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 03:30:50.156273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-18 03:30:50.156282 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:30:50.156293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-18 03:30:50.156315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 03:30:50.156342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 03:30:50.156365 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-18 03:30:50.156445 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:30:50.156453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-18 03:30:50.156461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 03:30:50.156469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 03:30:50.156481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-18 03:30:50.156497 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:30:50.156505 | orchestrator | 2026-03-18 03:30:50.156513 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-03-18 03:30:50.156522 | orchestrator | Wednesday 18 March 2026 03:30:46 +0000 (0:00:00.713) 0:01:07.306 ******* 2026-03-18 03:30:50.156537 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-18 03:31:31.962714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-18 03:31:31.962815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-18 03:31:31.962829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:31:31.962853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:31:31.962880 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-18 03:31:31.962904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-18 03:31:31.962915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-18 03:31:31.962923 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-18 03:31:31.962932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-18 03:31:31.962941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-18 03:31:31.962959 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-18 03:31:31.962968 | orchestrator | 2026-03-18 03:31:31.962978 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-03-18 03:31:31.962988 | orchestrator | Wednesday 18 March 2026 03:30:50 +0000 (0:00:03.369) 0:01:10.675 ******* 2026-03-18 03:31:31.962997 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:31:31.963006 | orchestrator | 2026-03-18 03:31:31.963014 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-03-18 03:31:31.963021 | orchestrator | Wednesday 18 March 2026 03:30:52 +0000 (0:00:02.134) 0:01:12.810 ******* 2026-03-18 03:31:31.963029 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:31:31.963037 | orchestrator | 2026-03-18 03:31:31.963045 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-03-18 03:31:31.963053 | orchestrator | Wednesday 18 March 2026 03:30:54 +0000 (0:00:02.329) 0:01:15.139 ******* 2026-03-18 03:31:31.963061 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:31:31.963068 | orchestrator | 2026-03-18 03:31:31.963076 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-03-18 03:31:31.963085 | orchestrator | Wednesday 18 March 2026 03:31:31 +0000 (0:00:36.978) 0:01:52.118 ******* 2026-03-18 03:31:31.963099 | 
orchestrator | 2026-03-18 03:31:31.963120 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-03-18 03:32:23.151491 | orchestrator | Wednesday 18 March 2026 03:31:31 +0000 (0:00:00.075) 0:01:52.193 ******* 2026-03-18 03:32:23.151590 | orchestrator | 2026-03-18 03:32:23.151602 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-03-18 03:32:23.151610 | orchestrator | Wednesday 18 March 2026 03:31:31 +0000 (0:00:00.082) 0:01:52.275 ******* 2026-03-18 03:32:23.151618 | orchestrator | 2026-03-18 03:32:23.151625 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-03-18 03:32:23.151633 | orchestrator | Wednesday 18 March 2026 03:31:31 +0000 (0:00:00.092) 0:01:52.367 ******* 2026-03-18 03:32:23.151641 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:32:23.151649 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:32:23.151657 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:32:23.151664 | orchestrator | 2026-03-18 03:32:23.151672 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-03-18 03:32:23.151679 | orchestrator | Wednesday 18 March 2026 03:31:46 +0000 (0:00:14.996) 0:02:07.364 ******* 2026-03-18 03:32:23.151686 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:32:23.151694 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:32:23.151701 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:32:23.151708 | orchestrator | 2026-03-18 03:32:23.151715 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-03-18 03:32:23.151723 | orchestrator | Wednesday 18 March 2026 03:31:58 +0000 (0:00:11.241) 0:02:18.606 ******* 2026-03-18 03:32:23.151730 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:32:23.151737 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:32:23.151766 | 
orchestrator | changed: [testbed-node-1] 2026-03-18 03:32:23.151773 | orchestrator | 2026-03-18 03:32:23.151781 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 2026-03-18 03:32:23.151790 | orchestrator | Wednesday 18 March 2026 03:32:03 +0000 (0:00:05.284) 0:02:23.891 ******* 2026-03-18 03:32:23.151802 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:32:23.151813 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:32:23.151824 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:32:23.151836 | orchestrator | 2026-03-18 03:32:23.151847 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:32:23.151860 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-18 03:32:23.151874 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-18 03:32:23.151884 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-18 03:32:23.151896 | orchestrator | 2026-03-18 03:32:23.151907 | orchestrator | 2026-03-18 03:32:23.151918 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:32:23.151929 | orchestrator | Wednesday 18 March 2026 03:32:22 +0000 (0:00:19.168) 0:02:43.059 ******* 2026-03-18 03:32:23.151940 | orchestrator | =============================================================================== 2026-03-18 03:32:23.151951 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 36.98s 2026-03-18 03:32:23.151961 | orchestrator | manila : Restart manila-share container -------------------------------- 19.17s 2026-03-18 03:32:23.151972 | orchestrator | manila : Restart manila-api container ---------------------------------- 15.00s 2026-03-18 03:32:23.151983 | orchestrator | service-ks-register : 
manila | Creating endpoints ---------------------- 12.92s 2026-03-18 03:32:23.151993 | orchestrator | manila : Restart manila-data container --------------------------------- 11.24s 2026-03-18 03:32:23.152004 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.86s 2026-03-18 03:32:23.152014 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.48s 2026-03-18 03:32:23.152025 | orchestrator | manila : Restart manila-scheduler container ----------------------------- 5.28s 2026-03-18 03:32:23.152051 | orchestrator | manila : Copying over config.json files for services -------------------- 4.65s 2026-03-18 03:32:23.152063 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.08s 2026-03-18 03:32:23.152074 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.91s 2026-03-18 03:32:23.152086 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.90s 2026-03-18 03:32:23.152100 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.84s 2026-03-18 03:32:23.152112 | orchestrator | manila : Check manila containers ---------------------------------------- 3.37s 2026-03-18 03:32:23.152124 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.22s 2026-03-18 03:32:23.152136 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.18s 2026-03-18 03:32:23.152149 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.44s 2026-03-18 03:32:23.152160 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.33s 2026-03-18 03:32:23.152171 | orchestrator | manila : Creating Manila database --------------------------------------- 2.13s 2026-03-18 03:32:23.152182 | orchestrator | manila : Copy over multiple ceph 
configs for Manila --------------------- 1.98s 2026-03-18 03:32:23.512950 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh 2026-03-18 03:32:35.802774 | orchestrator | 2026-03-18 03:32:35 | INFO  | Task 02f47132-1588-450d-ab4e-3ce64eda8a50 (netdata) was prepared for execution. 2026-03-18 03:32:35.803501 | orchestrator | 2026-03-18 03:32:35 | INFO  | It takes a moment until task 02f47132-1588-450d-ab4e-3ce64eda8a50 (netdata) has been started and output is visible here. 2026-03-18 03:34:13.360790 | orchestrator | 2026-03-18 03:34:13.360896 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 03:34:13.360916 | orchestrator | 2026-03-18 03:34:13.360933 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 03:34:13.360949 | orchestrator | Wednesday 18 March 2026 03:32:40 +0000 (0:00:00.264) 0:00:00.264 ******* 2026-03-18 03:34:13.360965 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-03-18 03:34:13.360981 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-03-18 03:34:13.360995 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-03-18 03:34:13.361010 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-03-18 03:34:13.361025 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-03-18 03:34:13.361041 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-03-18 03:34:13.361056 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-03-18 03:34:13.361072 | orchestrator | 2026-03-18 03:34:13.361087 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-03-18 03:34:13.361103 | orchestrator | 2026-03-18 03:34:13.361120 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 
2026-03-18 03:34:13.361136 | orchestrator | Wednesday 18 March 2026 03:32:41 +0000 (0:00:00.953) 0:00:01.217 ******* 2026-03-18 03:34:13.361148 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 03:34:13.361159 | orchestrator | 2026-03-18 03:34:13.361168 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-03-18 03:34:13.361177 | orchestrator | Wednesday 18 March 2026 03:32:43 +0000 (0:00:01.433) 0:00:02.651 ******* 2026-03-18 03:34:13.361186 | orchestrator | ok: [testbed-manager] 2026-03-18 03:34:13.361196 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:34:13.361204 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:34:13.361213 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:34:13.361222 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:34:13.361231 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:34:13.361239 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:34:13.361248 | orchestrator | 2026-03-18 03:34:13.361256 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-03-18 03:34:13.361265 | orchestrator | Wednesday 18 March 2026 03:32:45 +0000 (0:00:01.989) 0:00:04.641 ******* 2026-03-18 03:34:13.361274 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:34:13.361282 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:34:13.361290 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:34:13.361299 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:34:13.361307 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:34:13.361342 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:34:13.361352 | orchestrator | ok: [testbed-manager] 2026-03-18 03:34:13.361362 | orchestrator | 2026-03-18 03:34:13.361372 | orchestrator | TASK [osism.services.netdata 
: Add repository gpg key] ************************* 2026-03-18 03:34:13.361382 | orchestrator | Wednesday 18 March 2026 03:32:47 +0000 (0:00:02.345) 0:00:06.987 ******* 2026-03-18 03:34:13.361392 | orchestrator | changed: [testbed-manager] 2026-03-18 03:34:13.361402 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:34:13.361412 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:34:13.361422 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:34:13.361432 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:34:13.361442 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:34:13.361451 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:34:13.361461 | orchestrator | 2026-03-18 03:34:13.361470 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-03-18 03:34:13.361503 | orchestrator | Wednesday 18 March 2026 03:32:49 +0000 (0:00:01.759) 0:00:08.746 ******* 2026-03-18 03:34:13.361513 | orchestrator | changed: [testbed-manager] 2026-03-18 03:34:13.361523 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:34:13.361532 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:34:13.361542 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:34:13.361552 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:34:13.361577 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:34:13.361587 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:34:13.361596 | orchestrator | 2026-03-18 03:34:13.361606 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-03-18 03:34:13.361616 | orchestrator | Wednesday 18 March 2026 03:33:05 +0000 (0:00:15.991) 0:00:24.738 ******* 2026-03-18 03:34:13.361626 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:34:13.361635 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:34:13.361645 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:34:13.361654 | orchestrator | changed: 
[testbed-manager] 2026-03-18 03:34:13.361662 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:34:13.361670 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:34:13.361679 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:34:13.361687 | orchestrator | 2026-03-18 03:34:13.361696 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-03-18 03:34:13.361704 | orchestrator | Wednesday 18 March 2026 03:33:47 +0000 (0:00:42.177) 0:01:06.915 ******* 2026-03-18 03:34:13.361714 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 03:34:13.361724 | orchestrator | 2026-03-18 03:34:13.361733 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-03-18 03:34:13.361741 | orchestrator | Wednesday 18 March 2026 03:33:49 +0000 (0:00:01.677) 0:01:08.592 ******* 2026-03-18 03:34:13.361750 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-03-18 03:34:13.361759 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-03-18 03:34:13.361767 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-03-18 03:34:13.361776 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-03-18 03:34:13.361801 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-03-18 03:34:13.361811 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-03-18 03:34:13.361819 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-03-18 03:34:13.361828 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-03-18 03:34:13.361837 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-03-18 03:34:13.361845 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 
2026-03-18 03:34:13.361854 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-03-18 03:34:13.361862 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-03-18 03:34:13.361871 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-03-18 03:34:13.361879 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-03-18 03:34:13.361888 | orchestrator | 2026-03-18 03:34:13.361897 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-03-18 03:34:13.361907 | orchestrator | Wednesday 18 March 2026 03:33:52 +0000 (0:00:03.396) 0:01:11.989 ******* 2026-03-18 03:34:13.361915 | orchestrator | ok: [testbed-manager] 2026-03-18 03:34:13.361924 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:34:13.361932 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:34:13.361941 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:34:13.361949 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:34:13.361957 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:34:13.361966 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:34:13.361974 | orchestrator | 2026-03-18 03:34:13.361983 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-03-18 03:34:13.361999 | orchestrator | Wednesday 18 March 2026 03:33:53 +0000 (0:00:01.343) 0:01:13.333 ******* 2026-03-18 03:34:13.362008 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:34:13.362080 | orchestrator | changed: [testbed-manager] 2026-03-18 03:34:13.362090 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:34:13.362099 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:34:13.362107 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:34:13.362116 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:34:13.362155 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:34:13.362169 | orchestrator | 2026-03-18 03:34:13.362185 | orchestrator | TASK 
[osism.services.netdata : Add netdata user to docker group] *************** 2026-03-18 03:34:13.362200 | orchestrator | Wednesday 18 March 2026 03:33:55 +0000 (0:00:01.330) 0:01:14.664 ******* 2026-03-18 03:34:13.362215 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:34:13.362230 | orchestrator | ok: [testbed-manager] 2026-03-18 03:34:13.362240 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:34:13.362249 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:34:13.362257 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:34:13.362266 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:34:13.362274 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:34:13.362282 | orchestrator | 2026-03-18 03:34:13.362291 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-03-18 03:34:13.362300 | orchestrator | Wednesday 18 March 2026 03:33:56 +0000 (0:00:01.397) 0:01:16.061 ******* 2026-03-18 03:34:13.362308 | orchestrator | ok: [testbed-manager] 2026-03-18 03:34:13.362339 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:34:13.362348 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:34:13.362357 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:34:13.362365 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:34:13.362374 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:34:13.362382 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:34:13.362390 | orchestrator | 2026-03-18 03:34:13.362399 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-03-18 03:34:13.362408 | orchestrator | Wednesday 18 March 2026 03:33:58 +0000 (0:00:01.755) 0:01:17.817 ******* 2026-03-18 03:34:13.362416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-03-18 03:34:13.362427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml 
for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 03:34:13.362436 | orchestrator | 2026-03-18 03:34:13.362445 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-03-18 03:34:13.362460 | orchestrator | Wednesday 18 March 2026 03:33:59 +0000 (0:00:01.427) 0:01:19.245 ******* 2026-03-18 03:34:13.362469 | orchestrator | changed: [testbed-manager] 2026-03-18 03:34:13.362477 | orchestrator | 2026-03-18 03:34:13.362486 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-03-18 03:34:13.362495 | orchestrator | Wednesday 18 March 2026 03:34:02 +0000 (0:00:02.341) 0:01:21.586 ******* 2026-03-18 03:34:13.362503 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:34:13.362512 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:34:13.362525 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:34:13.362539 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:34:13.362554 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:34:13.362569 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:34:13.362583 | orchestrator | changed: [testbed-manager] 2026-03-18 03:34:13.362597 | orchestrator | 2026-03-18 03:34:13.362610 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:34:13.362625 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 03:34:13.362642 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 03:34:13.362669 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 03:34:13.362684 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 03:34:13.362710 | orchestrator | testbed-node-3 : ok=15  
changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 03:34:13.854926 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 03:34:13.855038 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 03:34:13.855071 | orchestrator | 2026-03-18 03:34:13.855086 | orchestrator | 2026-03-18 03:34:13.855102 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:34:13.855118 | orchestrator | Wednesday 18 March 2026 03:34:13 +0000 (0:00:11.266) 0:01:32.853 ******* 2026-03-18 03:34:13.855132 | orchestrator | =============================================================================== 2026-03-18 03:34:13.855146 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 42.18s 2026-03-18 03:34:13.855160 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.99s 2026-03-18 03:34:13.855174 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.27s 2026-03-18 03:34:13.855189 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.40s 2026-03-18 03:34:13.855204 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.35s 2026-03-18 03:34:13.855216 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.34s 2026-03-18 03:34:13.855226 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.99s 2026-03-18 03:34:13.855234 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.76s 2026-03-18 03:34:13.855243 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.76s 2026-03-18 03:34:13.855252 | orchestrator | osism.services.netdata : Include config tasks 
--------------------------- 1.68s 2026-03-18 03:34:13.855260 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.43s 2026-03-18 03:34:13.855269 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.43s 2026-03-18 03:34:13.855277 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.40s 2026-03-18 03:34:13.855286 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.34s 2026-03-18 03:34:13.855295 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.33s 2026-03-18 03:34:13.855304 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.95s 2026-03-18 03:34:16.682705 | orchestrator | 2026-03-18 03:34:16 | INFO  | Task 383d9a9c-22d5-466b-9407-38e33dbfbb11 (prometheus) was prepared for execution. 2026-03-18 03:34:16.682791 | orchestrator | 2026-03-18 03:34:16 | INFO  | It takes a moment until task 383d9a9c-22d5-466b-9407-38e33dbfbb11 (prometheus) has been started and output is visible here. 
2026-03-18 03:34:26.988571 | orchestrator | 2026-03-18 03:34:26.988681 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 03:34:26.988698 | orchestrator | 2026-03-18 03:34:26.988710 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 03:34:26.988722 | orchestrator | Wednesday 18 March 2026 03:34:21 +0000 (0:00:00.340) 0:00:00.340 ******* 2026-03-18 03:34:26.988733 | orchestrator | ok: [testbed-manager] 2026-03-18 03:34:26.988745 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:34:26.988756 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:34:26.988793 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:34:26.988805 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:34:26.988816 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:34:26.988826 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:34:26.988837 | orchestrator | 2026-03-18 03:34:26.988849 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 03:34:26.988877 | orchestrator | Wednesday 18 March 2026 03:34:22 +0000 (0:00:00.961) 0:00:01.302 ******* 2026-03-18 03:34:26.988889 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-18 03:34:26.988900 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-18 03:34:26.988911 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-18 03:34:26.988922 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-18 03:34:26.988932 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-18 03:34:26.988943 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-18 03:34:26.988953 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-18 03:34:26.988964 | orchestrator | 2026-03-18 03:34:26.988980 | orchestrator | PLAY [Apply role 
prometheus] *************************************************** 2026-03-18 03:34:26.988999 | orchestrator | 2026-03-18 03:34:26.989017 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-18 03:34:26.989035 | orchestrator | Wednesday 18 March 2026 03:34:23 +0000 (0:00:00.948) 0:00:02.251 ******* 2026-03-18 03:34:26.989052 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 03:34:26.989071 | orchestrator | 2026-03-18 03:34:26.989091 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-18 03:34:26.989113 | orchestrator | Wednesday 18 March 2026 03:34:24 +0000 (0:00:01.556) 0:00:03.807 ******* 2026-03-18 03:34:26.989139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:26.989165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:26.989180 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-18 03:34:26.989194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:26.989236 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:26.989257 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:26.989273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:26.989294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:26.989345 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:26.989365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:26.989397 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:26.989443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:27.911635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:27.911744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:27.911757 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:27.911766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:27.911774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:27.911783 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-18 03:34:27.911791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:27.911832 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-18 03:34:27.911847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:27.911857 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 
2026-03-18 03:34:27.911867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:27.911875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:27.911883 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-18 03:34:27.911896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:27.911909 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:33.585296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:33.585475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 
03:34:33.585488 | orchestrator | 2026-03-18 03:34:33.585497 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-18 03:34:33.585506 | orchestrator | Wednesday 18 March 2026 03:34:27 +0000 (0:00:02.956) 0:00:06.764 ******* 2026-03-18 03:34:33.585514 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 03:34:33.585522 | orchestrator | 2026-03-18 03:34:33.585529 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-18 03:34:33.585535 | orchestrator | Wednesday 18 March 2026 03:34:29 +0000 (0:00:01.830) 0:00:08.595 ******* 2026-03-18 03:34:33.585543 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-18 03:34:33.585572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:33.585580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:33.585588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:33.585617 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:33.585625 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:33.585632 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:33.585639 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:33.585647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:33.585659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:33.585666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:33.585673 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:33.585690 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:35.728014 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:35.728106 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:35.728118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:35.728151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:35.728158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:35.728166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-18 03:34:35.728174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-18 03:34:35.728210 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-18 03:34:35.728220 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-18 03:34:35.728230 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:35.728243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:35.728249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:35.728256 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:35.728263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:35.728277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:36.759083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-18 03:34:36.759202 | orchestrator | 2026-03-18 03:34:36.759228 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-18 03:34:36.759246 | orchestrator | Wednesday 18 March 2026 03:34:35 +0000 (0:00:05.980) 0:00:14.575 ******* 2026-03-18 03:34:36.759292 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-18 03:34:36.759364 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 03:34:36.759382 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 03:34:36.759454 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-18 03:34:36.759498 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:36.759517 | orchestrator | skipping: [testbed-manager] 2026-03-18 03:34:36.759533 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 03:34:36.759564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:36.759579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:36.759596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 03:34:36.759611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:36.759627 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:34:36.759642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 03:34:36.759656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:36.759673 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:37.388029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 03:34:37.388193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:37.388222 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:34:37.388242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 03:34:37.388260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:37.388278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:37.388299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2026-03-18 03:34:37.388416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:37.388436 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:34:37.388479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 03:34:37.388510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 03:34:37.388529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 03:34:37.388548 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:34:37.388569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 03:34:37.388591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 03:34:37.388613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 03:34:37.388634 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:34:37.388665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 03:34:37.388699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 03:34:38.293018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 
03:34:38.293147 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:34:38.293176 | orchestrator | 2026-03-18 03:34:38.293198 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-18 03:34:38.293218 | orchestrator | Wednesday 18 March 2026 03:34:37 +0000 (0:00:01.654) 0:00:16.230 ******* 2026-03-18 03:34:38.293240 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-18 03:34:38.293263 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 03:34:38.293279 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 03:34:38.293383 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-18 03:34:38.293465 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:38.293488 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 03:34:38.293507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:38.293526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:38.293543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 03:34:38.293562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:38.293579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 03:34:38.293622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:38.293657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:39.580200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 03:34:39.580392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:39.580422 | orchestrator | skipping: [testbed-manager] 2026-03-18 03:34:39.580482 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:34:39.580503 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:34:39.580523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 03:34:39.580543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:39.580562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:39.580631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 03:34:39.580651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 03:34:39.580670 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:34:39.580714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 03:34:39.580734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 03:34:39.580752 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 03:34:39.580769 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:34:39.580783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 03:34:39.580798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 03:34:39.580826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 03:34:39.580843 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:34:39.580869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 03:34:39.580901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 03:34:43.549902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 03:34:43.549991 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:34:43.550008 | orchestrator | 2026-03-18 03:34:43.550109 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-18 03:34:43.550123 | orchestrator | Wednesday 18 March 2026 03:34:39 +0000 (0:00:02.188) 0:00:18.418 ******* 2026-03-18 03:34:43.550135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:43.550148 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-18 03:34:43.550184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:43.550207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:43.550220 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:43.550250 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:43.550262 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:43.550274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:43.550285 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:34:43.550296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:43.550379 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:43.550408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:43.550428 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:43.550460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:45.795495 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:45.795582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:45.795595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:45.795620 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-18 03:34:45.795631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:45.795661 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-18 03:34:45.795682 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-18 03:34:45.795713 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-18 03:34:45.795731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:45.795746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:45.795771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:34:45.795785 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:45.795805 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:45.795819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:45.795845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:34:49.414786 | orchestrator | 2026-03-18 03:34:49.414860 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-18 03:34:49.414872 | orchestrator | Wednesday 18 March 2026 03:34:45 +0000 (0:00:06.225) 0:00:24.644 ******* 2026-03-18 03:34:49.414880 | orchestrator | ok: [testbed-manager -> localhost] 
2026-03-18 03:34:49.414889 | orchestrator | 2026-03-18 03:34:49.414897 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-18 03:34:49.414905 | orchestrator | Wednesday 18 March 2026 03:34:46 +0000 (0:00:00.828) 0:00:25.472 ******* 2026-03-18 03:34:49.414914 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094313, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.543872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:49.414944 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094313, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.543872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:49.414953 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094357, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5557485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:49.414973 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094313, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.543872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:49.414981 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094313, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.543872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:34:49.414989 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094313, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.543872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-03-18 03:34:49.415010 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094313, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.543872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:49.415025 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094357, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5557485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:49.415034 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094357, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5557485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:49.415042 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094296, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5431685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:49.415057 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1094313, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.543872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:49.415074 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094357, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5557485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:49.415094 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 12980, 'inode': 1094357, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5557485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:49.415120 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094296, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5431685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:51.046857 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094296, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5431685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:51.046964 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094357, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5557485, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:51.046991 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094334, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.553264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:51.047020 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094296, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5431685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:51.047032 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094296, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5431685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-03-18 03:34:51.047044 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1094357, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5557485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:34:51.047056 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094334, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.553264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:51.047105 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094334, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.553264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:51.047118 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094296, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5431685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:51.047129 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094334, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.553264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:51.047146 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094287, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5403578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:51.047157 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 
'inode': 1094287, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5403578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:51.047169 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094334, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.553264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:51.047187 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094334, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.553264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:51.047206 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094287, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5403578, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:52.604859 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094316, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5445945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:52.604931 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094287, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5403578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:52.604950 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094287, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5403578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2026-03-18 03:34:52.604955 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094316, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5445945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:52.604959 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094331, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5498664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:52.604975 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094316, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5445945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:52.604980 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094287, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5403578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:52.604994 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094316, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5445945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:52.604998 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094316, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5445945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:52.605002 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1094296, 'dev': 
114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5431685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:34:52.605009 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094331, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5498664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:52.605013 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094321, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.545048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:52.605020 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094331, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5498664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:52.605024 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094331, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5498664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:52.605032 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094331, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5498664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:54.242092 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094321, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.545048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:54.242177 | 
orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094316, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5445945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:54.242201 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094321, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.545048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:54.242210 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094309, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5437522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:54.242240 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094321, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.545048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:54.242248 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094331, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5498664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:54.242256 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094309, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5437522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:54.242276 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094321, 'dev': 114, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.545048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:54.242285 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094321, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.545048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:54.242296 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094309, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5437522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:54.242379 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094352, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5547419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:54.242387 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094309, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5437522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:54.242393 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094352, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5547419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:54.242400 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094309, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5437522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:54.242414 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094352, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5547419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:56.657526 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094309, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5437522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:56.657613 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1094334, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.553264, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:34:56.657635 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094352, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5547419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:56.657640 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094282, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5389311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:56.657644 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094282, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5389311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:56.657648 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094282, 'dev': 
114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5389311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:56.657652 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094352, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5547419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:56.657666 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094282, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5389311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:56.657674 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094352, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5547419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:56.657682 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094372, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:56.657686 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094372, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:56.657690 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094282, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5389311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-03-18 03:34:56.657694 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094372, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:56.657698 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094372, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:56.657705 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094349, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5543106, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:58.396659 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1094287, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5403578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:34:58.396758 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094349, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5543106, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:58.396767 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094291, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5411885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:58.396772 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 
1094349, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5543106, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:58.396778 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094282, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5389311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:58.396784 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094349, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5543106, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:58.396789 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094291, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5411885, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:58.396815 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094285, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5389311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:58.396821 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094372, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:58.396827 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094285, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5389311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-03-18 03:34:58.396832 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094291, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5411885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:58.396837 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094291, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5411885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:58.396843 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094372, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:58.396848 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094327, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5469942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:34:58.396865 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094327, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5469942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:00.116223 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094349, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5543106, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:00.116392 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 
'inode': 1094285, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5389311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:00.116418 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094324, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5457184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:00.116435 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094349, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5543106, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:00.116452 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1094316, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5445945, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:35:00.116468 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094285, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5389311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:00.116534 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094324, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5457184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:00.116574 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094291, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5411885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
 2026-03-18 03:35:00.116591 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094327, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5469942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:00.116606 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094369, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:00.116624 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:35:00.116641 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094291, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5411885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:00.116657 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094285, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5389311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:00.116682 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094327, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5469942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:00.116706 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094369, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:00.116730 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:35:07.157848 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094285, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5389311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:07.157966 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094327, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5469942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:07.157984 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094324, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5457184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:07.157996 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094324, 'dev': 114, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5457184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:07.158008 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1094331, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5498664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:35:07.158140 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094327, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5469942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:07.158183 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094324, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5457184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:07.158225 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094369, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:07.158245 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:35:07.158266 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094324, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5457184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:07.158286 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094369, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-03-18 03:35:07.158332 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:35:07.158352 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094369, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:07.158386 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:35:07.158406 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094369, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-18 03:35:07.158424 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:35:07.158444 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1094321, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.545048, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:35:07.158487 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1094309, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5437522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:35:17.792383 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094352, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5547419, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:35:17.792493 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094282, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5389311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:35:17.792510 | orchestrator | changed: [testbed-manager] => 
(item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1094372, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:35:17.792522 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1094349, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5543106, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:35:17.792558 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1094291, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5411885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:35:17.792572 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1094285, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5389311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:35:17.792598 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1094327, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5469942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:35:17.792627 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1094324, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5457184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:35:17.792639 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1094369, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1773797704.5584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-18 03:35:17.792651 | orchestrator | 2026-03-18 03:35:17.792664 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-18 03:35:17.792677 | orchestrator | Wednesday 18 March 2026 03:35:14 +0000 (0:00:28.165) 0:00:53.638 ******* 2026-03-18 03:35:17.792687 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-18 03:35:17.792699 | orchestrator | 2026-03-18 03:35:17.792711 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-18 03:35:17.792721 | orchestrator | Wednesday 18 March 2026 03:35:15 +0000 (0:00:00.805) 0:00:54.443 ******* 2026-03-18 03:35:17.792732 | orchestrator | [WARNING]: Skipped 2026-03-18 03:35:17.792744 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-18 03:35:17.792764 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-18 03:35:17.792776 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-18 03:35:17.792787 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-18 03:35:17.792797 | orchestrator | [WARNING]: Skipped 2026-03-18 03:35:17.792806 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-18 03:35:17.792816 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-18 03:35:17.792826 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-18 03:35:17.792837 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-18 03:35:17.792847 | orchestrator | [WARNING]: Skipped 2026-03-18 03:35:17.792858 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-18 03:35:17.792868 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-18 03:35:17.792879 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-18 03:35:17.792890 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-18 03:35:17.792902 | orchestrator | [WARNING]: Skipped 2026-03-18 03:35:17.792913 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-18 03:35:17.792924 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-18 03:35:17.792936 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-18 03:35:17.792947 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-18 03:35:17.792959 | orchestrator | [WARNING]: Skipped 2026-03-18 03:35:17.793054 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-18 03:35:17.793070 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-18 03:35:17.793081 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-18 03:35:17.793093 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-18 03:35:17.793104 | orchestrator | [WARNING]: Skipped 2026-03-18 03:35:17.793116 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-18 03:35:17.793127 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-18 03:35:17.793140 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-18 03:35:17.793152 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-18 03:35:17.793165 | orchestrator | [WARNING]: Skipped 2026-03-18 03:35:17.793177 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-18 03:35:17.793188 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-18 03:35:17.793207 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-18 03:35:17.793220 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-18 03:35:17.793231 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-18 03:35:17.793242 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-18 03:35:17.793252 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-18 03:35:17.793263 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-18 03:35:17.793274 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-18 03:35:17.793284 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-18 03:35:17.793347 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-18 03:35:17.793359 | orchestrator | 2026-03-18 03:35:17.793381 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-18 03:35:51.811793 | orchestrator | Wednesday 18 March 2026 03:35:17 +0000 (0:00:02.192) 0:00:56.636 ******* 2026-03-18 03:35:51.811872 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-18 03:35:51.811882 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:35:51.811905 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-18 03:35:51.811911 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:35:51.811918 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-18 03:35:51.811924 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:35:51.811930 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-18 03:35:51.811939 | 
orchestrator | skipping: [testbed-node-5] 2026-03-18 03:35:51.811948 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-18 03:35:51.811957 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:35:51.811966 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-18 03:35:51.811975 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:35:51.811984 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-18 03:35:51.811994 | orchestrator | 2026-03-18 03:35:51.812004 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-18 03:35:51.812014 | orchestrator | Wednesday 18 March 2026 03:35:36 +0000 (0:00:18.914) 0:01:15.550 ******* 2026-03-18 03:35:51.812020 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-18 03:35:51.812027 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-18 03:35:51.812032 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:35:51.812038 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:35:51.812044 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-18 03:35:51.812050 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:35:51.812056 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-18 03:35:51.812061 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:35:51.812068 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-18 03:35:51.812077 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:35:51.812086 | orchestrator | skipping: 
[testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-18 03:35:51.812095 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:35:51.812105 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-18 03:35:51.812114 | orchestrator | 2026-03-18 03:35:51.812123 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-18 03:35:51.812133 | orchestrator | Wednesday 18 March 2026 03:35:39 +0000 (0:00:02.929) 0:01:18.480 ******* 2026-03-18 03:35:51.812143 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-18 03:35:51.812153 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:35:51.812160 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-18 03:35:51.812166 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-18 03:35:51.812172 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:35:51.812178 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:35:51.812187 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-18 03:35:51.812196 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:35:51.812205 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-18 03:35:51.812222 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-18 03:35:51.812231 | orchestrator | skipping: [testbed-node-4] 2026-03-18 
03:35:51.812241 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-18 03:35:51.812250 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:35:51.812260 | orchestrator | 2026-03-18 03:35:51.812331 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-18 03:35:51.812341 | orchestrator | Wednesday 18 March 2026 03:35:41 +0000 (0:00:02.093) 0:01:20.574 ******* 2026-03-18 03:35:51.812347 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-18 03:35:51.812353 | orchestrator | 2026-03-18 03:35:51.812359 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-18 03:35:51.812365 | orchestrator | Wednesday 18 March 2026 03:35:42 +0000 (0:00:00.829) 0:01:21.404 ******* 2026-03-18 03:35:51.812371 | orchestrator | skipping: [testbed-manager] 2026-03-18 03:35:51.812377 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:35:51.812383 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:35:51.812388 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:35:51.812409 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:35:51.812415 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:35:51.812421 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:35:51.812427 | orchestrator | 2026-03-18 03:35:51.812432 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-18 03:35:51.812438 | orchestrator | Wednesday 18 March 2026 03:35:43 +0000 (0:00:00.781) 0:01:22.186 ******* 2026-03-18 03:35:51.812444 | orchestrator | skipping: [testbed-manager] 2026-03-18 03:35:51.812450 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:35:51.812455 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:35:51.812461 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:35:51.812467 | 
orchestrator | changed: [testbed-node-1] 2026-03-18 03:35:51.812472 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:35:51.812478 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:35:51.812484 | orchestrator | 2026-03-18 03:35:51.812489 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-18 03:35:51.812495 | orchestrator | Wednesday 18 March 2026 03:35:45 +0000 (0:00:02.274) 0:01:24.460 ******* 2026-03-18 03:35:51.812501 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-18 03:35:51.812507 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-18 03:35:51.812513 | orchestrator | skipping: [testbed-manager] 2026-03-18 03:35:51.812519 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-18 03:35:51.812524 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-18 03:35:51.812530 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-18 03:35:51.812536 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:35:51.812541 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:35:51.812547 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:35:51.812552 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:35:51.812558 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-18 03:35:51.812564 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:35:51.812570 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-18 03:35:51.812575 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:35:51.812581 | orchestrator | 2026-03-18 03:35:51.812587 | orchestrator | TASK [prometheus : 
Copying config file for blackbox exporter] ****************** 2026-03-18 03:35:51.812593 | orchestrator | Wednesday 18 March 2026 03:35:47 +0000 (0:00:01.826) 0:01:26.287 ******* 2026-03-18 03:35:51.812604 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-18 03:35:51.812610 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:35:51.812616 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-18 03:35:51.812621 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:35:51.812627 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-18 03:35:51.812633 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:35:51.812638 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-18 03:35:51.812645 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:35:51.812650 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-18 03:35:51.812656 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:35:51.812662 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-18 03:35:51.812667 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:35:51.812673 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-18 03:35:51.812679 | orchestrator | 2026-03-18 03:35:51.812685 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-18 03:35:51.812690 | orchestrator | Wednesday 18 March 2026 03:35:49 +0000 (0:00:01.606) 0:01:27.893 ******* 2026-03-18 03:35:51.812696 | 
orchestrator | [WARNING]: Skipped 2026-03-18 03:35:51.812703 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-18 03:35:51.812709 | orchestrator | due to this access issue: 2026-03-18 03:35:51.812715 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-18 03:35:51.812721 | orchestrator | not a directory 2026-03-18 03:35:51.812726 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-18 03:35:51.812732 | orchestrator | 2026-03-18 03:35:51.812738 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-18 03:35:51.812744 | orchestrator | Wednesday 18 March 2026 03:35:50 +0000 (0:00:01.207) 0:01:29.101 ******* 2026-03-18 03:35:51.812753 | orchestrator | skipping: [testbed-manager] 2026-03-18 03:35:51.812759 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:35:51.812764 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:35:51.812770 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:35:51.812776 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:35:51.812782 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:35:51.812787 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:35:51.812793 | orchestrator | 2026-03-18 03:35:51.812799 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-18 03:35:51.812805 | orchestrator | Wednesday 18 March 2026 03:35:51 +0000 (0:00:01.063) 0:01:30.164 ******* 2026-03-18 03:35:51.812811 | orchestrator | skipping: [testbed-manager] 2026-03-18 03:35:51.812816 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:35:51.812822 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:35:51.812831 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:35:54.809796 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:35:54.809902 | orchestrator | skipping: [testbed-node-4] 2026-03-18 
03:35:54.809921 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:35:54.809934 | orchestrator | 2026-03-18 03:35:54.809949 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-18 03:35:54.809963 | orchestrator | Wednesday 18 March 2026 03:35:52 +0000 (0:00:00.985) 0:01:31.149 ******* 2026-03-18 03:35:54.809980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:35:54.810117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:35:54.810137 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-18 03:35:54.810152 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:35:54.810163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:35:54.810185 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 
03:35:54.810210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:35:54.810219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:35:54.810241 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:35:54.810254 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-18 03:35:54.810266 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:35:54.810331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:35:54.810347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:35:54.810367 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:35:54.810393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:35:58.848748 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:35:58.848859 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:35:58.848876 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-18 03:35:58.848890 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-18 03:35:58.848902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:35:58.848934 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-18 03:35:58.848967 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-18 03:35:58.849003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:35:58.849015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:35:58.849027 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:35:58.849038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-18 03:35:58.849049 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:35:58.849072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:35:58.849096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 03:35:58.849137 | orchestrator | 2026-03-18 03:35:58.849157 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-18 03:35:58.849186 | orchestrator | Wednesday 18 March 2026 03:35:56 +0000 (0:00:04.481) 0:01:35.630 ******* 2026-03-18 03:35:58.849219 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-18 
03:37:42.649042 | orchestrator | skipping: [testbed-manager] 2026-03-18 03:37:42.649188 | orchestrator | 2026-03-18 03:37:42.649234 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-18 03:37:42.649295 | orchestrator | Wednesday 18 March 2026 03:35:58 +0000 (0:00:01.307) 0:01:36.938 ******* 2026-03-18 03:37:42.649307 | orchestrator | 2026-03-18 03:37:42.649319 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-18 03:37:42.649330 | orchestrator | Wednesday 18 March 2026 03:35:58 +0000 (0:00:00.286) 0:01:37.224 ******* 2026-03-18 03:37:42.649341 | orchestrator | 2026-03-18 03:37:42.649352 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-18 03:37:42.649363 | orchestrator | Wednesday 18 March 2026 03:35:58 +0000 (0:00:00.073) 0:01:37.297 ******* 2026-03-18 03:37:42.649374 | orchestrator | 2026-03-18 03:37:42.649385 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-18 03:37:42.649396 | orchestrator | Wednesday 18 March 2026 03:35:58 +0000 (0:00:00.080) 0:01:37.378 ******* 2026-03-18 03:37:42.649407 | orchestrator | 2026-03-18 03:37:42.649418 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-18 03:37:42.649429 | orchestrator | Wednesday 18 March 2026 03:35:58 +0000 (0:00:00.079) 0:01:37.457 ******* 2026-03-18 03:37:42.649439 | orchestrator | 2026-03-18 03:37:42.649450 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-18 03:37:42.649461 | orchestrator | Wednesday 18 March 2026 03:35:58 +0000 (0:00:00.079) 0:01:37.536 ******* 2026-03-18 03:37:42.649472 | orchestrator | 2026-03-18 03:37:42.649482 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-18 03:37:42.649493 | orchestrator | Wednesday 
18 March 2026 03:35:58 +0000 (0:00:00.071) 0:01:37.608 ******* 2026-03-18 03:37:42.649504 | orchestrator | 2026-03-18 03:37:42.649515 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-18 03:37:42.649526 | orchestrator | Wednesday 18 March 2026 03:35:58 +0000 (0:00:00.093) 0:01:37.701 ******* 2026-03-18 03:37:42.649536 | orchestrator | changed: [testbed-manager] 2026-03-18 03:37:42.649548 | orchestrator | 2026-03-18 03:37:42.649560 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-18 03:37:42.649573 | orchestrator | Wednesday 18 March 2026 03:36:27 +0000 (0:00:28.267) 0:02:05.969 ******* 2026-03-18 03:37:42.649585 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:37:42.649598 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:37:42.649610 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:37:42.649622 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:37:42.649635 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:37:42.649647 | orchestrator | changed: [testbed-manager] 2026-03-18 03:37:42.649660 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:37:42.649671 | orchestrator | 2026-03-18 03:37:42.649684 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-18 03:37:42.649697 | orchestrator | Wednesday 18 March 2026 03:36:40 +0000 (0:00:13.199) 0:02:19.169 ******* 2026-03-18 03:37:42.649709 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:37:42.649721 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:37:42.649734 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:37:42.649747 | orchestrator | 2026-03-18 03:37:42.649787 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-18 03:37:42.649799 | orchestrator | Wednesday 18 March 2026 03:36:46 +0000 (0:00:05.945) 0:02:25.114 ******* 2026-03-18 
03:37:42.649810 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:37:42.649821 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:37:42.649831 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:37:42.649842 | orchestrator | 2026-03-18 03:37:42.649852 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-18 03:37:42.649863 | orchestrator | Wednesday 18 March 2026 03:36:52 +0000 (0:00:05.898) 0:02:31.012 ******* 2026-03-18 03:37:42.649873 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:37:42.649884 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:37:42.649895 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:37:42.649906 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:37:42.649916 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:37:42.649927 | orchestrator | changed: [testbed-manager] 2026-03-18 03:37:42.649937 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:37:42.649948 | orchestrator | 2026-03-18 03:37:42.649959 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-18 03:37:42.649969 | orchestrator | Wednesday 18 March 2026 03:37:06 +0000 (0:00:14.203) 0:02:45.216 ******* 2026-03-18 03:37:42.649980 | orchestrator | changed: [testbed-manager] 2026-03-18 03:37:42.649991 | orchestrator | 2026-03-18 03:37:42.650002 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-18 03:37:42.650083 | orchestrator | Wednesday 18 March 2026 03:37:15 +0000 (0:00:09.028) 0:02:54.245 ******* 2026-03-18 03:37:42.650097 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:37:42.650107 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:37:42.650118 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:37:42.650129 | orchestrator | 2026-03-18 03:37:42.650157 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter 
container] *** 2026-03-18 03:37:42.650168 | orchestrator | Wednesday 18 March 2026 03:37:26 +0000 (0:00:10.873) 0:03:05.118 ******* 2026-03-18 03:37:42.650179 | orchestrator | changed: [testbed-manager] 2026-03-18 03:37:42.650189 | orchestrator | 2026-03-18 03:37:42.650200 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-18 03:37:42.650211 | orchestrator | Wednesday 18 March 2026 03:37:31 +0000 (0:00:05.644) 0:03:10.763 ******* 2026-03-18 03:37:42.650222 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:37:42.650233 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:37:42.650295 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:37:42.650307 | orchestrator | 2026-03-18 03:37:42.650318 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:37:42.650331 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-18 03:37:42.650366 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-18 03:37:42.650378 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-18 03:37:42.650389 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-18 03:37:42.650400 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-18 03:37:42.650411 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-18 03:37:42.650422 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-18 03:37:42.650444 | orchestrator | 2026-03-18 03:37:42.650455 | orchestrator | 2026-03-18 03:37:42.650466 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-18 03:37:42.650477 | orchestrator | Wednesday 18 March 2026 03:37:42 +0000 (0:00:10.142) 0:03:20.905 ******* 2026-03-18 03:37:42.650488 | orchestrator | =============================================================================== 2026-03-18 03:37:42.650499 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 28.27s 2026-03-18 03:37:42.650510 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 28.17s 2026-03-18 03:37:42.650520 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.91s 2026-03-18 03:37:42.650531 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.20s 2026-03-18 03:37:42.650542 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.20s 2026-03-18 03:37:42.650553 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.87s 2026-03-18 03:37:42.650564 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.14s 2026-03-18 03:37:42.650574 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.03s 2026-03-18 03:37:42.650585 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.23s 2026-03-18 03:37:42.650596 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.98s 2026-03-18 03:37:42.650607 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.95s 2026-03-18 03:37:42.650618 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.90s 2026-03-18 03:37:42.650629 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.64s 2026-03-18 03:37:42.650639 | orchestrator | prometheus : Check prometheus 
containers -------------------------------- 4.48s 2026-03-18 03:37:42.650650 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.96s 2026-03-18 03:37:42.650661 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.93s 2026-03-18 03:37:42.650672 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.27s 2026-03-18 03:37:42.650682 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.19s 2026-03-18 03:37:42.650693 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.19s 2026-03-18 03:37:42.650704 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.09s 2026-03-18 03:37:46.914121 | orchestrator | 2026-03-18 03:37:46 | INFO  | Task 10c4c1db-16e1-4d3c-b79f-72c79b67025e (grafana) was prepared for execution. 2026-03-18 03:37:46.914234 | orchestrator | 2026-03-18 03:37:46 | INFO  | It takes a moment until task 10c4c1db-16e1-4d3c-b79f-72c79b67025e (grafana) has been started and output is visible here. 
2026-03-18 03:37:57.728573 | orchestrator | 2026-03-18 03:37:57.728681 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 03:37:57.728696 | orchestrator | 2026-03-18 03:37:57.728708 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 03:37:57.728719 | orchestrator | Wednesday 18 March 2026 03:37:51 +0000 (0:00:00.295) 0:00:00.295 ******* 2026-03-18 03:37:57.728748 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:37:57.728757 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:37:57.728764 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:37:57.728770 | orchestrator | 2026-03-18 03:37:57.728777 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 03:37:57.728783 | orchestrator | Wednesday 18 March 2026 03:37:52 +0000 (0:00:00.343) 0:00:00.638 ******* 2026-03-18 03:37:57.728789 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-18 03:37:57.728796 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-18 03:37:57.728803 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-18 03:37:57.728830 | orchestrator | 2026-03-18 03:37:57.728837 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-18 03:37:57.728843 | orchestrator | 2026-03-18 03:37:57.728849 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-18 03:37:57.728855 | orchestrator | Wednesday 18 March 2026 03:37:52 +0000 (0:00:00.513) 0:00:01.152 ******* 2026-03-18 03:37:57.728862 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:37:57.728869 | orchestrator | 2026-03-18 03:37:57.728875 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 
2026-03-18 03:37:57.728881 | orchestrator | Wednesday 18 March 2026 03:37:53 +0000 (0:00:00.662) 0:00:01.815 ******* 2026-03-18 03:37:57.728891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-18 03:37:57.728901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-18 03:37:57.728907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-18 03:37:57.728914 | orchestrator | 2026-03-18 03:37:57.728920 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-18 03:37:57.728926 | orchestrator | Wednesday 18 March 2026 03:37:54 +0000 (0:00:00.934) 0:00:02.749 ******* 2026-03-18 03:37:57.728933 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-18 03:37:57.728939 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-18 03:37:57.728948 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-18 03:37:57.728959 | orchestrator | 2026-03-18 03:37:57.728969 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-18 03:37:57.728978 | orchestrator | Wednesday 18 March 2026 03:37:55 +0000 (0:00:00.939) 0:00:03.689 ******* 2026-03-18 03:37:57.728988 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:37:57.728998 | orchestrator | 2026-03-18 03:37:57.729008 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-18 03:37:57.729018 | orchestrator | Wednesday 18 March 2026 03:37:55 +0000 (0:00:00.578) 0:00:04.267 ******* 2026-03-18 03:37:57.729060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-18 03:37:57.729074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-18 03:37:57.729085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-18 03:37:57.729096 | orchestrator | 2026-03-18 03:37:57.729108 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-18 03:37:57.729118 | orchestrator | Wednesday 18 March 2026 03:37:57 +0000 
(0:00:01.385) 0:00:05.653 ******* 2026-03-18 03:37:57.729128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-18 03:37:57.729138 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:37:57.729150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-18 03:37:57.729160 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:37:57.729181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-18 03:38:04.877608 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:38:04.877686 | orchestrator | 2026-03-18 03:38:04.877694 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-18 03:38:04.877701 | orchestrator | Wednesday 18 March 2026 03:37:57 +0000 (0:00:00.660) 0:00:06.314 ******* 2026-03-18 03:38:04.877708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-18 03:38:04.877715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-18 03:38:04.877720 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:38:04.877725 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:38:04.877730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-18 03:38:04.877736 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:38:04.877741 | orchestrator | 2026-03-18 03:38:04.877746 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-18 03:38:04.877751 | orchestrator | Wednesday 18 March 2026 03:37:58 +0000 (0:00:00.617) 0:00:06.931 ******* 2026-03-18 03:38:04.877756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-18 03:38:04.877777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-18 03:38:04.877798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-18 03:38:04.877804 | orchestrator | 2026-03-18 03:38:04.877831 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-18 03:38:04.877836 | orchestrator | Wednesday 18 March 2026 03:37:59 +0000 (0:00:01.327) 0:00:08.258 ******* 2026-03-18 03:38:04.877841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-18 03:38:04.877846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-18 03:38:04.877851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-18 03:38:04.877856 | 
orchestrator |
2026-03-18 03:38:04.877861 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-03-18 03:38:04.877870 | orchestrator | Wednesday 18 March 2026 03:38:01 +0000 (0:00:01.681) 0:00:09.940 *******
2026-03-18 03:38:04.877875 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:38:04.877880 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:38:04.877885 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:38:04.877890 | orchestrator |
2026-03-18 03:38:04.877895 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-03-18 03:38:04.877900 | orchestrator | Wednesday 18 March 2026 03:38:01 +0000 (0:00:00.342) 0:00:10.282 *******
2026-03-18 03:38:04.877905 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-18 03:38:04.877910 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-18 03:38:04.877915 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-03-18 03:38:04.877920 | orchestrator |
2026-03-18 03:38:04.877924 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-03-18 03:38:04.877929 | orchestrator | Wednesday 18 March 2026 03:38:02 +0000 (0:00:01.310) 0:00:11.592 *******
2026-03-18 03:38:04.877934 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-18 03:38:04.877940 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-18 03:38:04.877945 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-03-18 03:38:04.877950 | orchestrator |
2026-03-18 03:38:04.877955 |
orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-03-18 03:38:04.877967 | orchestrator | Wednesday 18 March 2026 03:38:04 +0000 (0:00:01.863) 0:00:13.456 *******
2026-03-18 03:38:11.669624 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-18 03:38:11.669713 | orchestrator |
2026-03-18 03:38:11.669724 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-03-18 03:38:11.669735 | orchestrator | Wednesday 18 March 2026 03:38:05 +0000 (0:00:00.824) 0:00:14.280 *******
2026-03-18 03:38:11.669743 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-03-18 03:38:11.669753 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-03-18 03:38:11.669761 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:38:11.669770 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:38:11.669777 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:38:11.669786 | orchestrator |
2026-03-18 03:38:11.669794 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-03-18 03:38:11.669802 | orchestrator | Wednesday 18 March 2026 03:38:06 +0000 (0:00:00.743) 0:00:15.024 *******
2026-03-18 03:38:11.669810 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:38:11.669818 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:38:11.669825 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:38:11.669833 | orchestrator |
2026-03-18 03:38:11.669841 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-03-18 03:38:11.669849 | orchestrator | Wednesday 18 March 2026 03:38:06 +0000 (0:00:00.357) 0:00:15.381 *******
2026-03-18 03:38:11.669860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644',
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094019, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4708805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:11.669871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094019, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4708805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:11.669902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1094019, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4708805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:11.669912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094125, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4961395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:11.669947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094125, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4961395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:11.669957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1094125, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4961395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:11.669965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094037, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4747558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:11.669974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094037, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4747558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:11.669988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1094037, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4747558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:11.669996 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094126, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4998655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:11.670004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094126, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4998655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:11.670095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1094126, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4998655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:15.378789 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094075, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4792066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:15.378890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094075, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4792066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:15.378920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1094075, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4792066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-03-18 03:38:15.378929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094105, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4932446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:15.378937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094105, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4932446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:15.378959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1094105, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4932446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:15.379022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094016, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4690735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:15.379032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094016, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4690735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:15.379047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1094016, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4690735, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-03-18 03:38:15.379054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094027, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4720035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:15.379060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094027, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4720035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:15.379067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1094027, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4720035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-03-18 03:38:15.379084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094039, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4747558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:19.133476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094039, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4747558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:19.134319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1094039, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4747558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:19.134359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094082, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4815507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:19.134368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094082, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4815507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:19.134376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1094082, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4815507, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:19.134396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094119, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4956324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:19.134424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094119, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4956324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:19.134439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1094119, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4956324, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:19.134447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094032, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4734104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:19.134455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094032, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4734104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:19.134462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1094032, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4734104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:19.134474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094103, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4878652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:19.134488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094103, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4878652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:23.443275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1094103, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4878652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:23.443383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094078, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.480706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:23.443398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094078, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.480706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:23.443408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1094078, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.480706, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:23.443418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094059, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4792066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:23.443447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094059, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4792066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:23.443517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1094059, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1773797704.4792066, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:23.443529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094053, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4775233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:23.443538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094053, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4775233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:23.443547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1094053, 'dev': 114, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4775233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:23.443556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094100, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4874568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:23.443569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1094100, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4874568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:23.443593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 
'inode': 1094100, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4874568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:27.318107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094042, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.476384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:27.318285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1094042, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.476384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:27.318306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 44791, 'inode': 1094042, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.476384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:27.318318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094118, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4943159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:27.318346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094118, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4943159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:27.318357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1094118, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.4943159, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:27.318409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094258, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.536925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:27.318421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094258, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.536925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:27.318431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1094258, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.536925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:27.318441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094158, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5111477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:27.318451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094158, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5111477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:27.318466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1094158, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5111477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:27.318490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094147, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5036423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:31.249432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094147, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5036423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:31.249537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1094147, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5036423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:31.249553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094177, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5148656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:31.249564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094177, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5148656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:31.249592 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1094177, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5148656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:31.249626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094139, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5014758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:38:31.249655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094139, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5014758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:31.249667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1094139, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5014758, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:31.249677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094216, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.527529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:31.249688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094216, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.527529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True,
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:31.249717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1094216, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.527529, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:31.249737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094179, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5217812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:31.249765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094179, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0,
'mtime': 1764530892.0, 'ctime': 1773797704.5217812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:35.462374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1094179, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5217812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:35.462460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094222, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.527937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:35.462469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False,
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094222, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.527937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:35.462504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1094222, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.527937, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:35.462511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094249, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5345545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:35.462516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644',
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094249, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5345545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:35.462534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1094249, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5345545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:35.462542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094212, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5260198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:35.462549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path':
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094212, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5260198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:35.462560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1094212, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5260198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:35.462570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094172, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.512594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:35.462577 | orchestrator | changed: [testbed-node-1] => (item={'key':
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094172, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.512594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:35.462590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1094172, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.512594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:39.259984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094155, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5058656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:39.260994 |
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094155, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5058656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:39.261077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1094155, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5058656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:39.261096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094170, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5111477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False,
'isgid': False}})
2026-03-18 03:38:39.261101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094170, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5111477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:39.261105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1094170, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5111477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:39.261126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094150, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5056527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False,
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:39.261132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094150, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5056527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:39.261140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1094150, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5056527, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:39.261147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094174, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5147974, 'gr_name': 'root',
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:39.261153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094174, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5147974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:39.261157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1094174, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5147974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:39.261166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094237, 'dev': 114,
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.534088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:43.289600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094237, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.534088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:43.289690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1094237, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.534088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:43.289710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False,
'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094230, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.530166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:43.289717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094230, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.530166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:43.289723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1094230, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.530166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:43.289728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json',
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094141, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5019026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:43.289745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094141, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5019026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:43.289756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1094141, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5019026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:43.289769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path':
'/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094143, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5032964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:43.289779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094143, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5032964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:43.289788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1094143, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5032964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:43.289798 | orchestrator | changed: [testbed-node-2] =>
(item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094202, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5247543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:38:43.289814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094202, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5247543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:40:22.688998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1094202, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5247543, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True,
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:40:22.689117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094226, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5286791, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:40:22.689167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094226, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5286791, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-18 03:40:22.689251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1094226, 'dev': 114, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1773797704.5286791, 'gr_name': 'root', 'pw_name': 'root', 'wusr':
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-18 03:40:22.689267 | orchestrator | 2026-03-18 03:40:22.689282 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-18 03:40:22.689294 | orchestrator | Wednesday 18 March 2026 03:38:44 +0000 (0:00:37.803) 0:00:53.184 ******* 2026-03-18 03:40:22.689306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-18 03:40:22.689337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-18 03:40:22.689373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-18 03:40:22.689385 | orchestrator | 2026-03-18 03:40:22.689397 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-18 03:40:22.689408 | orchestrator | Wednesday 18 March 2026 03:38:45 +0000 (0:00:01.209) 0:00:54.394 ******* 2026-03-18 03:40:22.689419 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:40:22.689431 | orchestrator | 2026-03-18 03:40:22.689442 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-18 03:40:22.689453 | orchestrator | Wednesday 18 March 2026 03:38:48 +0000 (0:00:02.246) 0:00:56.640 ******* 2026-03-18 03:40:22.689463 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:40:22.689474 | orchestrator | 2026-03-18 03:40:22.689485 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-18 03:40:22.689496 | orchestrator | Wednesday 18 March 2026 03:38:50 +0000 (0:00:02.338) 0:00:58.979 ******* 2026-03-18 03:40:22.689507 | orchestrator | 2026-03-18 03:40:22.689525 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-18 03:40:22.689538 | orchestrator | Wednesday 18 March 2026 03:38:50 +0000 (0:00:00.073) 0:00:59.053 ******* 2026-03-18 03:40:22.689551 | orchestrator | 2026-03-18 03:40:22.689563 | orchestrator | TASK [grafana : 
Flush handlers] ************************************************ 2026-03-18 03:40:22.689575 | orchestrator | Wednesday 18 March 2026 03:38:50 +0000 (0:00:00.079) 0:00:59.132 ******* 2026-03-18 03:40:22.689588 | orchestrator | 2026-03-18 03:40:22.689600 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-18 03:40:22.689613 | orchestrator | Wednesday 18 March 2026 03:38:50 +0000 (0:00:00.074) 0:00:59.207 ******* 2026-03-18 03:40:22.689625 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:40:22.689638 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:40:22.689651 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:40:22.689664 | orchestrator | 2026-03-18 03:40:22.689676 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-18 03:40:22.689687 | orchestrator | Wednesday 18 March 2026 03:38:52 +0000 (0:00:02.238) 0:01:01.445 ******* 2026-03-18 03:40:22.689697 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:40:22.689709 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:40:22.689720 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-03-18 03:40:22.689732 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-03-18 03:40:22.689743 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-03-18 03:40:22.689754 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2026-03-18 03:40:22.689772 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:40:22.689784 | orchestrator | 2026-03-18 03:40:22.689795 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-18 03:40:22.689806 | orchestrator | Wednesday 18 March 2026 03:39:43 +0000 (0:00:50.574) 0:01:52.020 ******* 2026-03-18 03:40:22.689817 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:40:22.689828 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:40:22.689838 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:40:22.689849 | orchestrator | 2026-03-18 03:40:22.689860 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-18 03:40:22.689870 | orchestrator | Wednesday 18 March 2026 03:40:17 +0000 (0:00:34.077) 0:02:26.097 ******* 2026-03-18 03:40:22.689881 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:40:22.689892 | orchestrator | 2026-03-18 03:40:22.689903 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-18 03:40:22.689913 | orchestrator | Wednesday 18 March 2026 03:40:19 +0000 (0:00:02.264) 0:02:28.361 ******* 2026-03-18 03:40:22.689924 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:40:22.689934 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:40:22.689945 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:40:22.689956 | orchestrator | 2026-03-18 03:40:22.689967 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-18 03:40:22.689977 | orchestrator | Wednesday 18 March 2026 03:40:20 +0000 (0:00:00.340) 0:02:28.702 ******* 2026-03-18 03:40:22.689990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-03-18 03:40:22.690009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-18 03:40:23.409120 | orchestrator | 2026-03-18 03:40:23.409272 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-18 03:40:23.409294 | orchestrator | Wednesday 18 March 2026 03:40:22 +0000 (0:00:02.568) 0:02:31.270 ******* 2026-03-18 03:40:23.409315 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:40:23.409335 | orchestrator | 2026-03-18 03:40:23.409354 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:40:23.409374 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-18 03:40:23.409395 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-18 03:40:23.409416 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-18 03:40:23.409437 | orchestrator | 2026-03-18 03:40:23.409458 | orchestrator | 2026-03-18 03:40:23.409478 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:40:23.409496 | orchestrator | Wednesday 18 March 2026 03:40:22 +0000 (0:00:00.305) 0:02:31.576 ******* 2026-03-18 03:40:23.409514 | orchestrator | =============================================================================== 2026-03-18 03:40:23.409532 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.57s 2026-03-18 03:40:23.409548 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 37.80s 2026-03-18 03:40:23.409588 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 34.08s 2026-03-18 03:40:23.409608 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.57s 2026-03-18 03:40:23.409656 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.34s 2026-03-18 03:40:23.409676 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.26s 2026-03-18 03:40:23.409694 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.25s 2026-03-18 03:40:23.409713 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.24s 2026-03-18 03:40:23.409732 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.86s 2026-03-18 03:40:23.409750 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.68s 2026-03-18 03:40:23.409768 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.39s 2026-03-18 03:40:23.409788 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.33s 2026-03-18 03:40:23.409806 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.31s 2026-03-18 03:40:23.409823 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.21s 2026-03-18 03:40:23.409843 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.94s 2026-03-18 03:40:23.409864 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.93s 2026-03-18 03:40:23.409882 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.82s 2026-03-18 03:40:23.409901 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.74s 2026-03-18 03:40:23.409920 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.66s 2026-03-18 03:40:23.409939 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.66s 2026-03-18 03:40:23.784506 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-03-18 03:40:23.793813 | orchestrator | + set -e 2026-03-18 03:40:23.793922 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-18 03:40:23.793948 | orchestrator | ++ export INTERACTIVE=false 2026-03-18 03:40:23.793969 | orchestrator | ++ INTERACTIVE=false 2026-03-18 03:40:23.793987 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-18 03:40:23.794006 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-18 03:40:23.794094 | orchestrator | + source /opt/manager-vars.sh 2026-03-18 03:40:23.794115 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-18 03:40:23.794134 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-18 03:40:23.794153 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-18 03:40:23.794172 | orchestrator | ++ CEPH_VERSION=reef 2026-03-18 03:40:23.794214 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-18 03:40:23.794231 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-18 03:40:23.794249 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-18 03:40:23.794267 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-18 03:40:23.794286 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-18 03:40:23.794326 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-18 03:40:23.794346 | orchestrator | ++ export ARA=false 2026-03-18 03:40:23.794365 | orchestrator | ++ ARA=false 2026-03-18 03:40:23.794383 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-18 03:40:23.794528 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-18 03:40:23.794559 | orchestrator | ++ export TEMPEST=false 2026-03-18 03:40:23.794579 | orchestrator | ++ 
TEMPEST=false 2026-03-18 03:40:23.794600 | orchestrator | ++ export IS_ZUUL=true 2026-03-18 03:40:23.794620 | orchestrator | ++ IS_ZUUL=true 2026-03-18 03:40:23.794639 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 03:40:23.794659 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 03:40:23.794680 | orchestrator | ++ export EXTERNAL_API=false 2026-03-18 03:40:23.794701 | orchestrator | ++ EXTERNAL_API=false 2026-03-18 03:40:23.794721 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-18 03:40:23.794734 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-18 03:40:23.794745 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-18 03:40:23.794756 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-18 03:40:23.794766 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-18 03:40:23.794777 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-18 03:40:23.794800 | orchestrator | ++ semver 9.5.0 8.0.0 2026-03-18 03:40:23.850007 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-18 03:40:23.850159 | orchestrator | + osism apply clusterapi 2026-03-18 03:40:26.187210 | orchestrator | 2026-03-18 03:40:26 | INFO  | Task 75fee7aa-5d1b-4ce0-a1d1-bb21d9099c39 (clusterapi) was prepared for execution. 2026-03-18 03:40:26.187926 | orchestrator | 2026-03-18 03:40:26 | INFO  | It takes a moment until task 75fee7aa-5d1b-4ce0-a1d1-bb21d9099c39 (clusterapi) has been started and output is visible here. 
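The `include.sh` fragment above gates version-dependent behavior on `semver 9.5.0 8.0.0` evaluating to `1`, i.e. MANAGER_VERSION being at least 8.0.0. A minimal sketch of such a comparison helper, assuming it prints `1`/`0`/`-1` for greater/equal/lower (inferred from `semver 9.5.0 8.0.0` followed by `[[ 1 -ge 0 ]]`; the real helper's contract is not shown in this log):

```shell
# Hypothetical semver comparison, leaning on GNU sort's version ordering.
# Prints 1 if $1 > $2, 0 if equal, -1 otherwise (assumed contract).
semver_cmp() {
  if [ "$1" = "$2" ]; then echo 0; return; fi
  lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
  if [ "$lower" = "$2" ]; then echo 1; else echo -1; fi
}

semver_cmp 9.5.0 8.0.0   # → 1
```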
2026-03-18 03:41:30.855800 | orchestrator | 2026-03-18 03:41:30.855895 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-03-18 03:41:30.855907 | orchestrator | 2026-03-18 03:41:30.855916 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-03-18 03:41:30.855924 | orchestrator | Wednesday 18 March 2026 03:40:31 +0000 (0:00:00.201) 0:00:00.201 ******* 2026-03-18 03:41:30.855932 | orchestrator | included: cert_manager for testbed-manager 2026-03-18 03:41:30.855940 | orchestrator | 2026-03-18 03:41:30.855948 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-03-18 03:41:30.855955 | orchestrator | Wednesday 18 March 2026 03:40:31 +0000 (0:00:00.265) 0:00:00.467 ******* 2026-03-18 03:41:30.855965 | orchestrator | changed: [testbed-manager] 2026-03-18 03:41:30.855978 | orchestrator | 2026-03-18 03:41:30.855989 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-03-18 03:41:30.856001 | orchestrator | Wednesday 18 March 2026 03:40:36 +0000 (0:00:05.555) 0:00:06.022 ******* 2026-03-18 03:41:30.856013 | orchestrator | changed: [testbed-manager] 2026-03-18 03:41:30.856024 | orchestrator | 2026-03-18 03:41:30.856037 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-03-18 03:41:30.856049 | orchestrator | 2026-03-18 03:41:30.856062 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-03-18 03:41:30.856070 | orchestrator | Wednesday 18 March 2026 03:41:10 +0000 (0:00:33.382) 0:00:39.404 ******* 2026-03-18 03:41:30.856077 | orchestrator | ok: [testbed-manager] 2026-03-18 03:41:30.856085 | orchestrator | 2026-03-18 03:41:30.856092 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-03-18 03:41:30.856099 | orchestrator | Wednesday 
18 March 2026 03:41:11 +0000 (0:00:01.189) 0:00:40.594 ******* 2026-03-18 03:41:30.856107 | orchestrator | ok: [testbed-manager] 2026-03-18 03:41:30.856114 | orchestrator | 2026-03-18 03:41:30.856121 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-03-18 03:41:30.856143 | orchestrator | Wednesday 18 March 2026 03:41:11 +0000 (0:00:00.162) 0:00:40.757 ******* 2026-03-18 03:41:30.856150 | orchestrator | ok: [testbed-manager] 2026-03-18 03:41:30.856251 | orchestrator | 2026-03-18 03:41:30.856264 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-03-18 03:41:30.856276 | orchestrator | Wednesday 18 March 2026 03:41:27 +0000 (0:00:16.364) 0:00:57.122 ******* 2026-03-18 03:41:30.856288 | orchestrator | skipping: [testbed-manager] 2026-03-18 03:41:30.856300 | orchestrator | 2026-03-18 03:41:30.856312 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-03-18 03:41:30.856325 | orchestrator | Wednesday 18 March 2026 03:41:28 +0000 (0:00:00.151) 0:00:57.273 ******* 2026-03-18 03:41:30.856337 | orchestrator | changed: [testbed-manager] 2026-03-18 03:41:30.856348 | orchestrator | 2026-03-18 03:41:30.856355 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:41:30.856364 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-18 03:41:30.856372 | orchestrator | 2026-03-18 03:41:30.856379 | orchestrator | 2026-03-18 03:41:30.856387 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:41:30.856394 | orchestrator | Wednesday 18 March 2026 03:41:30 +0000 (0:00:02.319) 0:00:59.593 ******* 2026-03-18 03:41:30.856401 | orchestrator | =============================================================================== 2026-03-18 03:41:30.856408 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 33.38s 2026-03-18 03:41:30.856415 | orchestrator | Initialize the CAPI management cluster --------------------------------- 16.36s 2026-03-18 03:41:30.856423 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.56s 2026-03-18 03:41:30.856449 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.32s 2026-03-18 03:41:30.856457 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.19s 2026-03-18 03:41:30.856464 | orchestrator | Include cert_manager role ----------------------------------------------- 0.27s 2026-03-18 03:41:30.856471 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.16s 2026-03-18 03:41:30.856478 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.15s 2026-03-18 03:41:31.266672 | orchestrator | + osism apply magnum 2026-03-18 03:41:33.494771 | orchestrator | 2026-03-18 03:41:33 | INFO  | Task d9de4796-edde-487d-a3fa-21d3ec411801 (magnum) was prepared for execution. 2026-03-18 03:41:33.494880 | orchestrator | 2026-03-18 03:41:33 | INFO  | It takes a moment until task d9de4796-edde-487d-a3fa-21d3ec411801 (magnum) has been started and output is visible here. 
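Both deploy steps funnel through the same `osism apply <role>` entry point, with `OSISM_APPLY_RETRY=1` exported earlier in the script. A hedged sketch of the retry semantics that variable suggests (the helper name and exact behavior are assumptions, not taken from `include.sh`):

```shell
# Hypothetical retry helper: run a command up to N times, stop at first success.
retry() {
  n="$1"; shift
  i=1
  while true; do
    "$@" && return 0                  # success: stop retrying
    [ "$i" -ge "$n" ] && return 1     # attempts exhausted
    i=$((i + 1))
  done
}

# e.g. retry "${OSISM_APPLY_RETRY:-1}" osism apply magnum
```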
2026-03-18 03:42:17.038664 | orchestrator | 2026-03-18 03:42:17.038801 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 03:42:17.038827 | orchestrator | 2026-03-18 03:42:17.038845 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 03:42:17.038863 | orchestrator | Wednesday 18 March 2026 03:41:38 +0000 (0:00:00.288) 0:00:00.288 ******* 2026-03-18 03:42:17.038880 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:42:17.038899 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:42:17.038917 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:42:17.038934 | orchestrator | 2026-03-18 03:42:17.038952 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 03:42:17.038962 | orchestrator | Wednesday 18 March 2026 03:41:38 +0000 (0:00:00.348) 0:00:00.637 ******* 2026-03-18 03:42:17.038973 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-18 03:42:17.038984 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-18 03:42:17.038994 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-18 03:42:17.039004 | orchestrator | 2026-03-18 03:42:17.039014 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-18 03:42:17.039023 | orchestrator | 2026-03-18 03:42:17.039033 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-18 03:42:17.039043 | orchestrator | Wednesday 18 March 2026 03:41:38 +0000 (0:00:00.503) 0:00:01.141 ******* 2026-03-18 03:42:17.039053 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:42:17.039064 | orchestrator | 2026-03-18 03:42:17.039073 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-18 
03:42:17.039083 | orchestrator | Wednesday 18 March 2026 03:41:39 +0000 (0:00:00.619) 0:00:01.761 ******* 2026-03-18 03:42:17.039093 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-18 03:42:17.039103 | orchestrator | 2026-03-18 03:42:17.039113 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-18 03:42:17.039122 | orchestrator | Wednesday 18 March 2026 03:41:43 +0000 (0:00:03.585) 0:00:05.347 ******* 2026-03-18 03:42:17.039132 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-18 03:42:17.039173 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-18 03:42:17.039185 | orchestrator | 2026-03-18 03:42:17.039196 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-18 03:42:17.039207 | orchestrator | Wednesday 18 March 2026 03:41:49 +0000 (0:00:06.546) 0:00:11.893 ******* 2026-03-18 03:42:17.039219 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-18 03:42:17.039230 | orchestrator | 2026-03-18 03:42:17.039241 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-18 03:42:17.039253 | orchestrator | Wednesday 18 March 2026 03:41:53 +0000 (0:00:03.512) 0:00:15.406 ******* 2026-03-18 03:42:17.039290 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-18 03:42:17.039302 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-18 03:42:17.039314 | orchestrator | 2026-03-18 03:42:17.039341 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-18 03:42:17.039353 | orchestrator | Wednesday 18 March 2026 03:41:57 +0000 (0:00:03.789) 0:00:19.195 ******* 2026-03-18 03:42:17.039363 | orchestrator | ok: [testbed-node-0] => (item=admin) 
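The service-ks-register tasks above are the Ansible equivalent of registering Magnum in Keystone by hand. Roughly the same sequence with the OpenStack CLI, for orientation only (the region name and password handling are placeholders, not taken from this log):

```shell
# Manual counterpart of the service-ks-register steps above (illustrative only).
openstack service create --name magnum container-infra
openstack endpoint create --region RegionOne \
    magnum internal https://api-int.testbed.osism.xyz:9511/v1
openstack endpoint create --region RegionOne \
    magnum public https://api.testbed.osism.xyz:9511/v1
openstack user create --project service --password "$MAGNUM_PASSWORD" magnum
openstack role add --project service --user magnum admin
```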
2026-03-18 03:42:17.039375 | orchestrator | 2026-03-18 03:42:17.039386 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-18 03:42:17.039397 | orchestrator | Wednesday 18 March 2026 03:42:00 +0000 (0:00:03.387) 0:00:22.582 ******* 2026-03-18 03:42:17.039408 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-18 03:42:17.039419 | orchestrator | 2026-03-18 03:42:17.039429 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-18 03:42:17.039440 | orchestrator | Wednesday 18 March 2026 03:42:04 +0000 (0:00:03.766) 0:00:26.349 ******* 2026-03-18 03:42:17.039451 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:42:17.039462 | orchestrator | 2026-03-18 03:42:17.039473 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-18 03:42:17.039484 | orchestrator | Wednesday 18 March 2026 03:42:07 +0000 (0:00:03.428) 0:00:29.778 ******* 2026-03-18 03:42:17.039495 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:42:17.039506 | orchestrator | 2026-03-18 03:42:17.039517 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-18 03:42:17.039528 | orchestrator | Wednesday 18 March 2026 03:42:11 +0000 (0:00:03.973) 0:00:33.751 ******* 2026-03-18 03:42:17.039538 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:42:17.039548 | orchestrator | 2026-03-18 03:42:17.039558 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-18 03:42:17.039568 | orchestrator | Wednesday 18 March 2026 03:42:15 +0000 (0:00:03.747) 0:00:37.499 ******* 2026-03-18 03:42:17.039601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:17.039616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:17.039627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:17.039650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:42:17.039662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:42:17.039679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:42:24.797902 | orchestrator | 2026-03-18 03:42:24.797999 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-18 03:42:24.798010 | orchestrator | Wednesday 18 March 2026 03:42:17 +0000 (0:00:01.695) 0:00:39.194 ******* 2026-03-18 03:42:24.798060 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:42:24.798069 | orchestrator | 2026-03-18 03:42:24.798076 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-18 03:42:24.798083 | orchestrator | Wednesday 18 March 2026 03:42:17 +0000 (0:00:00.164) 0:00:39.359 ******* 2026-03-18 03:42:24.798089 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:42:24.798096 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:42:24.798102 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:42:24.798109 | orchestrator | 2026-03-18 03:42:24.798116 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-18 03:42:24.798123 | orchestrator | Wednesday 18 March 2026 03:42:17 +0000 (0:00:00.351) 0:00:39.710 ******* 2026-03-18 03:42:24.798177 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-18 03:42:24.798185 | orchestrator | 2026-03-18 03:42:24.798191 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-18 03:42:24.798197 | orchestrator | Wednesday 18 March 2026 03:42:18 +0000 (0:00:00.908) 0:00:40.619 ******* 2026-03-18 03:42:24.798207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:24.798230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:24.798237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:24.798261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:42:24.798269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:42:24.798282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:42:24.798290 | orchestrator | 2026-03-18 03:42:24.798296 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-18 03:42:24.798303 
| orchestrator | Wednesday 18 March 2026 03:42:20 +0000 (0:00:02.491) 0:00:43.111 ******* 2026-03-18 03:42:24.798309 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:42:24.798316 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:42:24.798326 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:42:24.798331 | orchestrator | 2026-03-18 03:42:24.798338 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-18 03:42:24.798344 | orchestrator | Wednesday 18 March 2026 03:42:21 +0000 (0:00:00.543) 0:00:43.654 ******* 2026-03-18 03:42:24.798351 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:42:24.798357 | orchestrator | 2026-03-18 03:42:24.798363 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-18 03:42:24.798369 | orchestrator | Wednesday 18 March 2026 03:42:22 +0000 (0:00:00.640) 0:00:44.294 ******* 2026-03-18 03:42:24.798375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:24.798387 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:25.888683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:25.888777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:42:25.888807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:42:25.888816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:42:25.888824 | orchestrator | 2026-03-18 03:42:25.888833 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-18 03:42:25.888842 | orchestrator | Wednesday 18 March 2026 03:42:24 +0000 (0:00:02.668) 0:00:46.962 ******* 2026-03-18 03:42:25.888865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-18 03:42:25.888893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:42:25.888901 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:42:25.888913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-18 03:42:25.888921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:42:25.888928 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:42:25.888934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-18 03:42:25.888953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:42:29.769366 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:42:29.769497 | orchestrator | 2026-03-18 
03:42:29.769527 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-18 03:42:29.769550 | orchestrator | Wednesday 18 March 2026 03:42:25 +0000 (0:00:01.082) 0:00:48.045 ******* 2026-03-18 03:42:29.769566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-18 03:42:29.769601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:42:29.769613 | 
orchestrator | skipping: [testbed-node-0] 2026-03-18 03:42:29.769625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-18 03:42:29.769637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:42:29.769671 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:42:29.769707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-18 03:42:29.769729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:42:29.769747 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:42:29.769765 | orchestrator | 2026-03-18 03:42:29.769784 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-18 03:42:29.769803 | orchestrator | Wednesday 18 March 2026 03:42:26 +0000 (0:00:00.981) 0:00:49.027 ******* 2026-03-18 03:42:29.769832 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:29.769853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:29.769900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:36.521556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:42:36.521691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:42:36.521738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:42:36.521762 | orchestrator | 2026-03-18 03:42:36.521783 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-18 03:42:36.521801 | orchestrator | Wednesday 18 March 2026 03:42:29 +0000 (0:00:02.905) 0:00:51.932 ******* 2026-03-18 03:42:36.521836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:36.521868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:36.521879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:36.521896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:42:36.521906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:42:36.521925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:42:36.521935 | orchestrator | 2026-03-18 03:42:36.521945 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-18 03:42:36.521955 | orchestrator | Wednesday 18 March 2026 03:42:35 +0000 (0:00:06.051) 0:00:57.984 ******* 2026-03-18 03:42:36.521973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-18 03:42:38.505973 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:42:38.506115 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:42:38.506175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-18 03:42:38.506220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:42:38.506231 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:42:38.506240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-18 03:42:38.506264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 03:42:38.506273 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:42:38.506281 | orchestrator | 2026-03-18 03:42:38.506290 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-18 03:42:38.506299 | orchestrator | Wednesday 18 March 2026 03:42:36 +0000 (0:00:00.703) 0:00:58.687 ******* 2026-03-18 03:42:38.506308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:38.506322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:38.506337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-18 03:42:38.506346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:42:38.506366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-18 03:43:26.703205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-03-18 03:43:26.703313 | orchestrator | 2026-03-18 03:43:26.703342 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-18 03:43:26.703368 | orchestrator | Wednesday 18 March 2026 03:42:38 +0000 (0:00:01.975) 0:01:00.663 ******* 2026-03-18 03:43:26.703375 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:43:26.703382 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:43:26.703387 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:43:26.703393 | orchestrator | 2026-03-18 03:43:26.703399 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-18 03:43:26.703405 | orchestrator | Wednesday 18 March 2026 03:42:39 +0000 (0:00:00.576) 0:01:01.239 ******* 2026-03-18 03:43:26.703410 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:43:26.703416 | orchestrator | 2026-03-18 03:43:26.703422 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-18 03:43:26.703428 | orchestrator | Wednesday 18 March 2026 03:42:41 +0000 (0:00:02.206) 0:01:03.446 ******* 2026-03-18 03:43:26.703434 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:43:26.703440 | orchestrator | 2026-03-18 03:43:26.703446 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-18 03:43:26.703453 | orchestrator | Wednesday 18 March 2026 03:42:43 +0000 (0:00:02.332) 0:01:05.779 ******* 2026-03-18 03:43:26.703458 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:43:26.703464 | orchestrator | 2026-03-18 03:43:26.703470 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-18 03:43:26.703476 | orchestrator | Wednesday 18 March 2026 03:43:00 +0000 (0:00:16.581) 0:01:22.360 ******* 2026-03-18 03:43:26.703482 | orchestrator | 2026-03-18 03:43:26.703488 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-03-18 03:43:26.703495 | orchestrator | Wednesday 18 March 2026 03:43:00 +0000 (0:00:00.125) 0:01:22.486 ******* 2026-03-18 03:43:26.703501 | orchestrator | 2026-03-18 03:43:26.703507 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-18 03:43:26.703513 | orchestrator | Wednesday 18 March 2026 03:43:00 +0000 (0:00:00.082) 0:01:22.568 ******* 2026-03-18 03:43:26.703519 | orchestrator | 2026-03-18 03:43:26.703525 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-18 03:43:26.703531 | orchestrator | Wednesday 18 March 2026 03:43:00 +0000 (0:00:00.084) 0:01:22.652 ******* 2026-03-18 03:43:26.703538 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:43:26.703544 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:43:26.703550 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:43:26.703556 | orchestrator | 2026-03-18 03:43:26.703563 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-18 03:43:26.703569 | orchestrator | Wednesday 18 March 2026 03:43:15 +0000 (0:00:14.703) 0:01:37.356 ******* 2026-03-18 03:43:26.703575 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:43:26.703582 | orchestrator | changed: [testbed-node-1] 2026-03-18 03:43:26.703588 | orchestrator | changed: [testbed-node-2] 2026-03-18 03:43:26.703594 | orchestrator | 2026-03-18 03:43:26.703601 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:43:26.703608 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-18 03:43:26.703616 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-18 03:43:26.703622 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-18 03:43:26.703628 | orchestrator | 2026-03-18 03:43:26.703634 | orchestrator | 2026-03-18 03:43:26.703641 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:43:26.703647 | orchestrator | Wednesday 18 March 2026 03:43:26 +0000 (0:00:11.091) 0:01:48.447 ******* 2026-03-18 03:43:26.703653 | orchestrator | =============================================================================== 2026-03-18 03:43:26.703670 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.58s 2026-03-18 03:43:26.703676 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.70s 2026-03-18 03:43:26.703682 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.09s 2026-03-18 03:43:26.703689 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.55s 2026-03-18 03:43:26.703694 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.05s 2026-03-18 03:43:26.703701 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.97s 2026-03-18 03:43:26.703708 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.79s 2026-03-18 03:43:26.703735 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.77s 2026-03-18 03:43:26.703742 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.75s 2026-03-18 03:43:26.703749 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.59s 2026-03-18 03:43:26.703759 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.51s 2026-03-18 03:43:26.703769 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.43s 2026-03-18 03:43:26.703775 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.39s 2026-03-18 03:43:26.703781 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.91s 2026-03-18 03:43:26.703787 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.67s 2026-03-18 03:43:26.703793 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.49s 2026-03-18 03:43:26.703799 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.33s 2026-03-18 03:43:26.703813 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.21s 2026-03-18 03:43:26.703819 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.98s 2026-03-18 03:43:26.703825 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.70s 2026-03-18 03:43:27.468674 | orchestrator | ok: Runtime: 1:44:56.371972 2026-03-18 03:43:27.699104 | 2026-03-18 03:43:27.699285 | TASK [Deploy in a nutshell] 2026-03-18 03:43:28.232664 | orchestrator | skipping: Conditional result was False 2026-03-18 03:43:28.249418 | 2026-03-18 03:43:28.249571 | TASK [Bootstrap services] 2026-03-18 03:43:28.929219 | orchestrator | 2026-03-18 03:43:28.929417 | orchestrator | # BOOTSTRAP 2026-03-18 03:43:28.929458 | orchestrator | 2026-03-18 03:43:28.929474 | orchestrator | + set -e 2026-03-18 03:43:28.929498 | orchestrator | + echo 2026-03-18 03:43:28.929512 | orchestrator | + echo '# BOOTSTRAP' 2026-03-18 03:43:28.929530 | orchestrator | + echo 2026-03-18 03:43:28.929574 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-18 03:43:28.938913 | orchestrator | + set -e 2026-03-18 03:43:28.939002 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-18 03:43:31.477957 | orchestrator | 2026-03-18 03:43:31 | INFO  | It takes a 
moment until task d28290bd-93f1-4508-a5b1-d44b24d96600 (flavor-manager) has been started and output is visible here. 2026-03-18 03:43:39.915172 | orchestrator | 2026-03-18 03:43:35 | INFO  | Flavor SCS-1L-1 created 2026-03-18 03:43:39.915304 | orchestrator | 2026-03-18 03:43:35 | INFO  | Flavor SCS-1L-1-5 created 2026-03-18 03:43:39.915332 | orchestrator | 2026-03-18 03:43:35 | INFO  | Flavor SCS-1V-2 created 2026-03-18 03:43:39.915351 | orchestrator | 2026-03-18 03:43:35 | INFO  | Flavor SCS-1V-2-5 created 2026-03-18 03:43:39.915370 | orchestrator | 2026-03-18 03:43:35 | INFO  | Flavor SCS-1V-4 created 2026-03-18 03:43:39.915388 | orchestrator | 2026-03-18 03:43:36 | INFO  | Flavor SCS-1V-4-10 created 2026-03-18 03:43:39.915406 | orchestrator | 2026-03-18 03:43:36 | INFO  | Flavor SCS-1V-8 created 2026-03-18 03:43:39.915426 | orchestrator | 2026-03-18 03:43:36 | INFO  | Flavor SCS-1V-8-20 created 2026-03-18 03:43:39.915465 | orchestrator | 2026-03-18 03:43:36 | INFO  | Flavor SCS-2V-4 created 2026-03-18 03:43:39.915487 | orchestrator | 2026-03-18 03:43:36 | INFO  | Flavor SCS-2V-4-10 created 2026-03-18 03:43:39.915507 | orchestrator | 2026-03-18 03:43:36 | INFO  | Flavor SCS-2V-8 created 2026-03-18 03:43:39.915520 | orchestrator | 2026-03-18 03:43:36 | INFO  | Flavor SCS-2V-8-20 created 2026-03-18 03:43:39.915530 | orchestrator | 2026-03-18 03:43:37 | INFO  | Flavor SCS-2V-16 created 2026-03-18 03:43:39.915542 | orchestrator | 2026-03-18 03:43:37 | INFO  | Flavor SCS-2V-16-50 created 2026-03-18 03:43:39.915553 | orchestrator | 2026-03-18 03:43:37 | INFO  | Flavor SCS-4V-8 created 2026-03-18 03:43:39.915564 | orchestrator | 2026-03-18 03:43:37 | INFO  | Flavor SCS-4V-8-20 created 2026-03-18 03:43:39.915574 | orchestrator | 2026-03-18 03:43:37 | INFO  | Flavor SCS-4V-16 created 2026-03-18 03:43:39.915585 | orchestrator | 2026-03-18 03:43:37 | INFO  | Flavor SCS-4V-16-50 created 2026-03-18 03:43:39.915596 | orchestrator | 2026-03-18 03:43:38 | INFO  | Flavor 
SCS-4V-32 created 2026-03-18 03:43:39.915607 | orchestrator | 2026-03-18 03:43:38 | INFO  | Flavor SCS-4V-32-100 created 2026-03-18 03:43:39.915618 | orchestrator | 2026-03-18 03:43:38 | INFO  | Flavor SCS-8V-16 created 2026-03-18 03:43:39.915629 | orchestrator | 2026-03-18 03:43:38 | INFO  | Flavor SCS-8V-16-50 created 2026-03-18 03:43:39.915640 | orchestrator | 2026-03-18 03:43:38 | INFO  | Flavor SCS-8V-32 created 2026-03-18 03:43:39.915651 | orchestrator | 2026-03-18 03:43:38 | INFO  | Flavor SCS-8V-32-100 created 2026-03-18 03:43:39.915662 | orchestrator | 2026-03-18 03:43:39 | INFO  | Flavor SCS-16V-32 created 2026-03-18 03:43:39.915673 | orchestrator | 2026-03-18 03:43:39 | INFO  | Flavor SCS-16V-32-100 created 2026-03-18 03:43:39.915683 | orchestrator | 2026-03-18 03:43:39 | INFO  | Flavor SCS-2V-4-20s created 2026-03-18 03:43:39.915694 | orchestrator | 2026-03-18 03:43:39 | INFO  | Flavor SCS-4V-8-50s created 2026-03-18 03:43:39.915705 | orchestrator | 2026-03-18 03:43:39 | INFO  | Flavor SCS-8V-32-100s created 2026-03-18 03:43:42.926604 | orchestrator | 2026-03-18 03:43:42 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-18 03:43:53.193888 | orchestrator | 2026-03-18 03:43:53 | INFO  | Task 939983a9-2c28-4396-a3cc-e574071e868b (bootstrap-basic) was prepared for execution. 2026-03-18 03:43:53.193966 | orchestrator | 2026-03-18 03:43:53 | INFO  | It takes a moment until task 939983a9-2c28-4396-a3cc-e574071e868b (bootstrap-basic) has been started and output is visible here. 
2026-03-18 03:44:40.679930 | orchestrator | 2026-03-18 03:44:40.680039 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-18 03:44:40.680054 | orchestrator | 2026-03-18 03:44:40.680065 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-18 03:44:40.680121 | orchestrator | Wednesday 18 March 2026 03:43:57 +0000 (0:00:00.084) 0:00:00.084 ******* 2026-03-18 03:44:40.680132 | orchestrator | ok: [localhost] 2026-03-18 03:44:40.680142 | orchestrator | 2026-03-18 03:44:40.680152 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-18 03:44:40.680162 | orchestrator | Wednesday 18 March 2026 03:43:59 +0000 (0:00:02.012) 0:00:02.097 ******* 2026-03-18 03:44:40.680172 | orchestrator | ok: [localhost] 2026-03-18 03:44:40.680182 | orchestrator | 2026-03-18 03:44:40.680192 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-18 03:44:40.680201 | orchestrator | Wednesday 18 March 2026 03:44:07 +0000 (0:00:07.974) 0:00:10.072 ******* 2026-03-18 03:44:40.680216 | orchestrator | changed: [localhost] 2026-03-18 03:44:40.680233 | orchestrator | 2026-03-18 03:44:40.680249 | orchestrator | TASK [Create public network] *************************************************** 2026-03-18 03:44:40.680265 | orchestrator | Wednesday 18 March 2026 03:44:14 +0000 (0:00:06.784) 0:00:16.857 ******* 2026-03-18 03:44:40.680282 | orchestrator | changed: [localhost] 2026-03-18 03:44:40.680299 | orchestrator | 2026-03-18 03:44:40.680316 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-18 03:44:40.680333 | orchestrator | Wednesday 18 March 2026 03:44:21 +0000 (0:00:06.273) 0:00:23.131 ******* 2026-03-18 03:44:40.680351 | orchestrator | changed: [localhost] 2026-03-18 03:44:40.680361 | orchestrator | 2026-03-18 03:44:40.680371 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-03-18 03:44:40.680380 | orchestrator | Wednesday 18 March 2026 03:44:27 +0000 (0:00:06.700) 0:00:29.831 ******* 2026-03-18 03:44:40.680390 | orchestrator | changed: [localhost] 2026-03-18 03:44:40.680400 | orchestrator | 2026-03-18 03:44:40.680409 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-18 03:44:40.680419 | orchestrator | Wednesday 18 March 2026 03:44:32 +0000 (0:00:04.695) 0:00:34.526 ******* 2026-03-18 03:44:40.680429 | orchestrator | changed: [localhost] 2026-03-18 03:44:40.680438 | orchestrator | 2026-03-18 03:44:40.680448 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-18 03:44:40.680466 | orchestrator | Wednesday 18 March 2026 03:44:36 +0000 (0:00:04.144) 0:00:38.671 ******* 2026-03-18 03:44:40.680478 | orchestrator | ok: [localhost] 2026-03-18 03:44:40.680488 | orchestrator | 2026-03-18 03:44:40.680499 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:44:40.680511 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-18 03:44:40.680522 | orchestrator | 2026-03-18 03:44:40.680534 | orchestrator | 2026-03-18 03:44:40.680545 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:44:40.680556 | orchestrator | Wednesday 18 March 2026 03:44:40 +0000 (0:00:03.806) 0:00:42.478 ******* 2026-03-18 03:44:40.680567 | orchestrator | =============================================================================== 2026-03-18 03:44:40.680578 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.97s 2026-03-18 03:44:40.680589 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.78s 2026-03-18 03:44:40.680600 | 
orchestrator | Set public network to default ------------------------------------------- 6.70s 2026-03-18 03:44:40.680611 | orchestrator | Create public network --------------------------------------------------- 6.27s 2026-03-18 03:44:40.680644 | orchestrator | Create public subnet ---------------------------------------------------- 4.70s 2026-03-18 03:44:40.680656 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.14s 2026-03-18 03:44:40.680668 | orchestrator | Create manager role ----------------------------------------------------- 3.81s 2026-03-18 03:44:40.680679 | orchestrator | Gathering Facts --------------------------------------------------------- 2.01s 2026-03-18 03:44:43.253091 | orchestrator | 2026-03-18 03:44:43 | INFO  | It takes a moment until task 37f330b2-b0d8-4075-9303-0cd43988121a (image-manager) has been started and output is visible here. 2026-03-18 03:45:27.725391 | orchestrator | 2026-03-18 03:44:46 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-18 03:45:27.725527 | orchestrator | 2026-03-18 03:44:46 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-18 03:45:27.725555 | orchestrator | 2026-03-18 03:44:46 | INFO  | Importing image Cirros 0.6.2 2026-03-18 03:45:27.725575 | orchestrator | 2026-03-18 03:44:46 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-18 03:45:27.725612 | orchestrator | 2026-03-18 03:44:48 | INFO  | Waiting for image to leave queued state... 2026-03-18 03:45:27.725634 | orchestrator | 2026-03-18 03:44:50 | INFO  | Waiting for import to complete... 
2026-03-18 03:45:27.725653 | orchestrator | 2026-03-18 03:45:00 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-03-18 03:45:27.725672 | orchestrator | 2026-03-18 03:45:01 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-03-18 03:45:27.725690 | orchestrator | 2026-03-18 03:45:01 | INFO  | Setting internal_version = 0.6.2
2026-03-18 03:45:27.725707 | orchestrator | 2026-03-18 03:45:01 | INFO  | Setting image_original_user = cirros
2026-03-18 03:45:27.725725 | orchestrator | 2026-03-18 03:45:01 | INFO  | Adding tag os:cirros
2026-03-18 03:45:27.725743 | orchestrator | 2026-03-18 03:45:01 | INFO  | Setting property architecture: x86_64
2026-03-18 03:45:27.725761 | orchestrator | 2026-03-18 03:45:01 | INFO  | Setting property hw_disk_bus: scsi
2026-03-18 03:45:27.725778 | orchestrator | 2026-03-18 03:45:02 | INFO  | Setting property hw_rng_model: virtio
2026-03-18 03:45:27.725797 | orchestrator | 2026-03-18 03:45:02 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-18 03:45:27.725814 | orchestrator | 2026-03-18 03:45:02 | INFO  | Setting property hw_watchdog_action: reset
2026-03-18 03:45:27.725833 | orchestrator | 2026-03-18 03:45:02 | INFO  | Setting property hypervisor_type: qemu
2026-03-18 03:45:27.725852 | orchestrator | 2026-03-18 03:45:03 | INFO  | Setting property os_distro: cirros
2026-03-18 03:45:27.725869 | orchestrator | 2026-03-18 03:45:03 | INFO  | Setting property os_purpose: minimal
2026-03-18 03:45:27.725888 | orchestrator | 2026-03-18 03:45:03 | INFO  | Setting property replace_frequency: never
2026-03-18 03:45:27.725905 | orchestrator | 2026-03-18 03:45:04 | INFO  | Setting property uuid_validity: none
2026-03-18 03:45:27.725922 | orchestrator | 2026-03-18 03:45:04 | INFO  | Setting property provided_until: none
2026-03-18 03:45:27.725940 | orchestrator | 2026-03-18 03:45:05 | INFO  | Setting property image_description: Cirros
2026-03-18 03:45:27.725960 | orchestrator | 2026-03-18 03:45:05 | INFO  | Setting property image_name: Cirros
2026-03-18 03:45:27.725978 | orchestrator | 2026-03-18 03:45:05 | INFO  | Setting property internal_version: 0.6.2
2026-03-18 03:45:27.725996 | orchestrator | 2026-03-18 03:45:05 | INFO  | Setting property image_original_user: cirros
2026-03-18 03:45:27.726155 | orchestrator | 2026-03-18 03:45:06 | INFO  | Setting property os_version: 0.6.2
2026-03-18 03:45:27.726198 | orchestrator | 2026-03-18 03:45:06 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-18 03:45:27.726219 | orchestrator | 2026-03-18 03:45:06 | INFO  | Setting property image_build_date: 2023-05-30
2026-03-18 03:45:27.726238 | orchestrator | 2026-03-18 03:45:07 | INFO  | Checking status of 'Cirros 0.6.2'
2026-03-18 03:45:27.726257 | orchestrator | 2026-03-18 03:45:07 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-03-18 03:45:27.726277 | orchestrator | 2026-03-18 03:45:07 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-03-18 03:45:27.726296 | orchestrator | 2026-03-18 03:45:07 | INFO  | Processing image 'Cirros 0.6.3'
2026-03-18 03:45:27.726322 | orchestrator | 2026-03-18 03:45:07 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-03-18 03:45:27.726343 | orchestrator | 2026-03-18 03:45:07 | INFO  | Importing image Cirros 0.6.3
2026-03-18 03:45:27.726357 | orchestrator | 2026-03-18 03:45:07 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-18 03:45:27.726368 | orchestrator | 2026-03-18 03:45:09 | INFO  | Waiting for image to leave queued state...
2026-03-18 03:45:27.726379 | orchestrator | 2026-03-18 03:45:11 | INFO  | Waiting for import to complete...
2026-03-18 03:45:27.726412 | orchestrator | 2026-03-18 03:45:21 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-18 03:45:27.726424 | orchestrator | 2026-03-18 03:45:21 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-18 03:45:27.726434 | orchestrator | 2026-03-18 03:45:21 | INFO  | Setting internal_version = 0.6.3
2026-03-18 03:45:27.726444 | orchestrator | 2026-03-18 03:45:21 | INFO  | Setting image_original_user = cirros
2026-03-18 03:45:27.726455 | orchestrator | 2026-03-18 03:45:21 | INFO  | Adding tag os:cirros
2026-03-18 03:45:27.726466 | orchestrator | 2026-03-18 03:45:22 | INFO  | Setting property architecture: x86_64
2026-03-18 03:45:27.726476 | orchestrator | 2026-03-18 03:45:22 | INFO  | Setting property hw_disk_bus: scsi
2026-03-18 03:45:27.726487 | orchestrator | 2026-03-18 03:45:22 | INFO  | Setting property hw_rng_model: virtio
2026-03-18 03:45:27.726497 | orchestrator | 2026-03-18 03:45:22 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-18 03:45:27.726508 | orchestrator | 2026-03-18 03:45:23 | INFO  | Setting property hw_watchdog_action: reset
2026-03-18 03:45:27.726519 | orchestrator | 2026-03-18 03:45:23 | INFO  | Setting property hypervisor_type: qemu
2026-03-18 03:45:27.726530 | orchestrator | 2026-03-18 03:45:23 | INFO  | Setting property os_distro: cirros
2026-03-18 03:45:27.726540 | orchestrator | 2026-03-18 03:45:23 | INFO  | Setting property os_purpose: minimal
2026-03-18 03:45:27.726551 | orchestrator | 2026-03-18 03:45:24 | INFO  | Setting property replace_frequency: never
2026-03-18 03:45:27.726562 | orchestrator | 2026-03-18 03:45:24 | INFO  | Setting property uuid_validity: none
2026-03-18 03:45:27.726573 | orchestrator | 2026-03-18 03:45:24 | INFO  | Setting property provided_until: none
2026-03-18 03:45:27.726583 | orchestrator | 2026-03-18 03:45:24 | INFO  | Setting property image_description: Cirros
2026-03-18 03:45:27.726594 | orchestrator | 2026-03-18 03:45:25 | INFO  | Setting property image_name: Cirros
2026-03-18 03:45:27.726604 | orchestrator | 2026-03-18 03:45:25 | INFO  | Setting property internal_version: 0.6.3
2026-03-18 03:45:27.726625 | orchestrator | 2026-03-18 03:45:25 | INFO  | Setting property image_original_user: cirros
2026-03-18 03:45:27.726664 | orchestrator | 2026-03-18 03:45:25 | INFO  | Setting property os_version: 0.6.3
2026-03-18 03:45:27.726675 | orchestrator | 2026-03-18 03:45:26 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-18 03:45:27.726686 | orchestrator | 2026-03-18 03:45:26 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-18 03:45:27.726699 | orchestrator | 2026-03-18 03:45:26 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-18 03:45:27.726718 | orchestrator | 2026-03-18 03:45:26 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-18 03:45:27.726735 | orchestrator | 2026-03-18 03:45:26 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-18 03:45:28.075618 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-18 03:45:30.450796 | orchestrator | 2026-03-18 03:45:30 | INFO  | date: 2026-03-18
2026-03-18 03:45:30.450872 | orchestrator | 2026-03-18 03:45:30 | INFO  | image: octavia-amphora-haproxy-2024.2.20260318.qcow2
2026-03-18 03:45:30.450901 | orchestrator | 2026-03-18 03:45:30 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260318.qcow2
2026-03-18 03:45:30.450912 | orchestrator | 2026-03-18 03:45:30 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260318.qcow2.CHECKSUM
2026-03-18 03:45:30.621512 | orchestrator | 2026-03-18 03:45:30 | INFO  | checksum: a3bb6d30a4f2a979e575e81f78fc47652af6863f625ed2003b3f139414890c13
2026-03-18 03:45:30.705817 | orchestrator | 2026-03-18 03:45:30 | INFO  | It takes a moment until task bef635c4-b497-44fe-bfb7-fe075392f2e7 (image-manager) has been started and output is visible here.
2026-03-18 03:46:43.497571 | orchestrator | 2026-03-18 03:45:33 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-18'
2026-03-18 03:46:43.497683 | orchestrator | 2026-03-18 03:45:33 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260318.qcow2: 200
2026-03-18 03:46:43.497701 | orchestrator | 2026-03-18 03:45:33 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-18
2026-03-18 03:46:43.497713 | orchestrator | 2026-03-18 03:45:33 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260318.qcow2
2026-03-18 03:46:43.497725 | orchestrator | 2026-03-18 03:45:34 | INFO  | Waiting for image to leave queued state...
2026-03-18 03:46:43.497735 | orchestrator | 2026-03-18 03:45:36 | INFO  | Waiting for import to complete...
2026-03-18 03:46:43.497747 | orchestrator | 2026-03-18 03:45:46 | INFO  | Waiting for import to complete...
2026-03-18 03:46:43.497758 | orchestrator | 2026-03-18 03:45:56 | INFO  | Waiting for import to complete...
2026-03-18 03:46:43.497769 | orchestrator | 2026-03-18 03:46:07 | INFO  | Waiting for import to complete...
2026-03-18 03:46:43.497781 | orchestrator | 2026-03-18 03:46:17 | INFO  | Waiting for import to complete...
2026-03-18 03:46:43.497794 | orchestrator | 2026-03-18 03:46:27 | INFO  | Waiting for import to complete...
2026-03-18 03:46:43.497805 | orchestrator | 2026-03-18 03:46:37 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-18' successfully completed, reloading images
2026-03-18 03:46:43.497817 | orchestrator | 2026-03-18 03:46:37 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-18'
2026-03-18 03:46:43.497851 | orchestrator | 2026-03-18 03:46:37 | INFO  | Setting internal_version = 2026-03-18
2026-03-18 03:46:43.497863 | orchestrator | 2026-03-18 03:46:37 | INFO  | Setting image_original_user = ubuntu
2026-03-18 03:46:43.497874 | orchestrator | 2026-03-18 03:46:37 | INFO  | Adding tag amphora
2026-03-18 03:46:43.497885 | orchestrator | 2026-03-18 03:46:38 | INFO  | Adding tag os:ubuntu
2026-03-18 03:46:43.497896 | orchestrator | 2026-03-18 03:46:38 | INFO  | Setting property architecture: x86_64
2026-03-18 03:46:43.497906 | orchestrator | 2026-03-18 03:46:38 | INFO  | Setting property hw_disk_bus: scsi
2026-03-18 03:46:43.497917 | orchestrator | 2026-03-18 03:46:38 | INFO  | Setting property hw_rng_model: virtio
2026-03-18 03:46:43.497928 | orchestrator | 2026-03-18 03:46:39 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-18 03:46:43.497939 | orchestrator | 2026-03-18 03:46:39 | INFO  | Setting property hw_watchdog_action: reset
2026-03-18 03:46:43.497949 | orchestrator | 2026-03-18 03:46:39 | INFO  | Setting property hypervisor_type: qemu
2026-03-18 03:46:43.497960 | orchestrator | 2026-03-18 03:46:39 | INFO  | Setting property os_distro: ubuntu
2026-03-18 03:46:43.497971 | orchestrator | 2026-03-18 03:46:40 | INFO  | Setting property replace_frequency: quarterly
2026-03-18 03:46:43.497981 | orchestrator | 2026-03-18 03:46:40 | INFO  | Setting property uuid_validity: last-1
2026-03-18 03:46:43.497992 | orchestrator | 2026-03-18 03:46:40 | INFO  | Setting property provided_until: none
2026-03-18 03:46:43.498002 | orchestrator | 2026-03-18 03:46:40 | INFO  | Setting property os_purpose: network
2026-03-18 03:46:43.498138 | orchestrator | 2026-03-18 03:46:41 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-03-18 03:46:43.498155 | orchestrator | 2026-03-18 03:46:41 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-03-18 03:46:43.498168 | orchestrator | 2026-03-18 03:46:41 | INFO  | Setting property internal_version: 2026-03-18
2026-03-18 03:46:43.498181 | orchestrator | 2026-03-18 03:46:41 | INFO  | Setting property image_original_user: ubuntu
2026-03-18 03:46:43.498193 | orchestrator | 2026-03-18 03:46:42 | INFO  | Setting property os_version: 2026-03-18
2026-03-18 03:46:43.498206 | orchestrator | 2026-03-18 03:46:42 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260318.qcow2
2026-03-18 03:46:43.498219 | orchestrator | 2026-03-18 03:46:42 | INFO  | Setting property image_build_date: 2026-03-18
2026-03-18 03:46:43.498231 | orchestrator | 2026-03-18 03:46:43 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-18'
2026-03-18 03:46:43.498243 | orchestrator | 2026-03-18 03:46:43 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-18'
2026-03-18 03:46:43.498272 | orchestrator | 2026-03-18 03:46:43 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-03-18 03:46:43.498284 | orchestrator | 2026-03-18 03:46:43 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-03-18 03:46:43.498297 | orchestrator | 2026-03-18 03:46:43 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-03-18 03:46:43.498307 | orchestrator | 2026-03-18 03:46:43 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-03-18 03:46:44.032720 | orchestrator | ok: Runtime: 0:03:15.340077
2026-03-18 03:46:44.053280 |
2026-03-18 03:46:44.053473 | TASK [Run checks]
2026-03-18 03:46:44.795315 | orchestrator | + set -e
2026-03-18 03:46:44.795443 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-18 03:46:44.795459 | orchestrator | ++ export INTERACTIVE=false
2026-03-18 03:46:44.795471 | orchestrator | ++ INTERACTIVE=false
2026-03-18 03:46:44.795479 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-18 03:46:44.795486 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-18 03:46:44.795549 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-18 03:46:44.796251 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-18 03:46:44.801074 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-18 03:46:44.801111 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-18 03:46:44.801119 | orchestrator | + echo
2026-03-18 03:46:44.801168 | orchestrator |
2026-03-18 03:46:44.801177 | orchestrator | # CHECK
2026-03-18 03:46:44.801183 | orchestrator |
2026-03-18 03:46:44.801205 | orchestrator | + echo '# CHECK'
2026-03-18 03:46:44.801213 | orchestrator | + echo
2026-03-18 03:46:44.801222 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-18 03:46:44.802081 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-18 03:46:44.858447 | orchestrator |
2026-03-18 03:46:44.858518 | orchestrator | ## Containers @ testbed-manager
2026-03-18 03:46:44.858530 | orchestrator |
2026-03-18 03:46:44.858566 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-18 03:46:44.858578 | orchestrator | + echo
2026-03-18 03:46:44.858587 | orchestrator | + echo '## Containers @ testbed-manager'
2026-03-18 03:46:44.858597 | orchestrator | + echo
2026-03-18 03:46:44.858607 | orchestrator | + osism container testbed-manager ps
2026-03-18 03:46:47.007085 | orchestrator | 2026-03-18 03:46:47 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-03-18 03:46:47.381340 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-18 03:46:47.381438 | orchestrator | 430686425eda registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter
2026-03-18 03:46:47.381468 | orchestrator | 681ef79c0817 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager
2026-03-18 03:46:47.381477 | orchestrator | dd171c7b0d13 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-03-18 03:46:47.381481 | orchestrator | b8e12ea6f21a registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-03-18 03:46:47.381485 | orchestrator | 2f45fa97f125 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server
2026-03-18 03:46:47.381493 | orchestrator | 29a388fb5472 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" About an hour ago Up 59 minutes cephclient
2026-03-18 03:46:47.381503 | orchestrator | 7276bd93aa73 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-18 03:46:47.381507 | orchestrator | 6f6aac484409 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-18 03:46:47.381528 | orchestrator | 7c57e41719a2 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-18 03:46:47.381535 | orchestrator | 273a950c0938 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient
2026-03-18 03:46:47.381542 | orchestrator | e3e29d4d9eaa phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin
2026-03-18 03:46:47.381548 | orchestrator | 20310cb78e5c registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer
2026-03-18 03:46:47.381555 | orchestrator | f40f3222475d registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit
2026-03-18 03:46:47.381562 | orchestrator | 218b3297f749 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid
2026-03-18 03:46:47.381585 | orchestrator | da1140d56b15 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1
2026-03-18 03:46:47.381592 | orchestrator | 2f014a68a94c registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible
2026-03-18 03:46:47.381599 | orchestrator | 306704a878af registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes
2026-03-18 03:46:47.381605 | orchestrator | 8981385af307 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible
2026-03-18 03:46:47.381611 | orchestrator | e9d7d99b96e5 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible
2026-03-18 03:46:47.381617 | orchestrator | 8382ae5c19f0 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1
2026-03-18 03:46:47.381622 | orchestrator | c82201edce73 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1
2026-03-18 03:46:47.381629 | orchestrator | 4f61bb78dd2d registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1
2026-03-18 03:46:47.381641 | orchestrator | 7530c99a10be registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1
2026-03-18 03:46:47.381647 | orchestrator | 5652509751c3 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-03-18 03:46:47.381653 | orchestrator | 14f3f25c3b1c registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend
2026-03-18 03:46:47.381659 | orchestrator | 5ee43578b333 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1
2026-03-18 03:46:47.381666 | orchestrator | afd09935160b registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1
2026-03-18 03:46:47.381672 | orchestrator | fab3ca596642 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1
2026-03-18 03:46:47.381678 | orchestrator | c384daa7eab8 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient
2026-03-18 03:46:47.381682 | orchestrator | 55b1593326f1 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-03-18 03:46:47.725726 | orchestrator |
2026-03-18 03:46:47.725827 | orchestrator | ## Images @ testbed-manager
2026-03-18 03:46:47.725845 | orchestrator |
2026-03-18 03:46:47.725891 | orchestrator | + echo
2026-03-18 03:46:47.725906 | orchestrator | + echo '## Images @ testbed-manager'
2026-03-18 03:46:47.725919 | orchestrator | + echo
2026-03-18 03:46:47.725936 | orchestrator | + osism container testbed-manager images
2026-03-18 03:46:50.251620 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-18 03:46:50.251697 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 838c3ef4bc31 24 hours ago 239MB
2026-03-18 03:46:50.251703 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 7 weeks ago 41.4MB
2026-03-18 03:46:50.251708 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB
2026-03-18 03:46:50.251713 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 3 months ago 608MB
2026-03-18 03:46:50.251718 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB
2026-03-18 03:46:50.251722 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB
2026-03-18 03:46:50.251726 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB
2026-03-18 03:46:50.251730 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 3 months ago 308MB
2026-03-18 03:46:50.252208 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-18 03:46:50.252243 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 3 months ago 404MB
2026-03-18 03:46:50.252247 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 3 months ago 839MB
2026-03-18 03:46:50.252251 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-18 03:46:50.252255 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 3 months ago 330MB
2026-03-18 03:46:50.252259 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 3 months ago 613MB
2026-03-18 03:46:50.252263 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 3 months ago 560MB
2026-03-18 03:46:50.252267 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 3 months ago 1.23GB
2026-03-18 03:46:50.252271 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 3 months ago 383MB
2026-03-18 03:46:50.252274 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 3 months ago 238MB
2026-03-18 03:46:50.252278 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB
2026-03-18 03:46:50.252282 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB
2026-03-18 03:46:50.252285 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 6 months ago 275MB
2026-03-18 03:46:50.252289 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 7 months ago 226MB
2026-03-18 03:46:50.252293 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 10 months ago 453MB
2026-03-18 03:46:50.252296 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB
2026-03-18 03:46:50.252300 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB
2026-03-18 03:46:50.647097 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-18 03:46:50.647490 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-18 03:46:50.716075 | orchestrator |
2026-03-18 03:46:50.716149 | orchestrator | ## Containers @ testbed-node-0
2026-03-18 03:46:50.716160 | orchestrator |
2026-03-18 03:46:50.716194 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-18 03:46:50.716203 | orchestrator | + echo
2026-03-18 03:46:50.716211 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-03-18 03:46:50.716221 | orchestrator | + echo
2026-03-18 03:46:50.716229 | orchestrator | + osism container testbed-node-0 ps
2026-03-18 03:46:53.249778 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-18 03:46:53.249860 | orchestrator | 1e01ba195fb7 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-03-18 03:46:53.249873 | orchestrator | e868fa7d3018 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-03-18 03:46:53.249883 | orchestrator | 5572aac2a2da registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2026-03-18 03:46:53.249891 | orchestrator | 10cb5c5239d3 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-03-18 03:46:53.249915 | orchestrator | 0d16331f1724 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-03-18 03:46:53.249924 | orchestrator | 7e9dac0f5849 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-03-18 03:46:53.249937 | orchestrator | 126405fbc56f registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-03-18 03:46:53.249945 | orchestrator | e53d65d9c266 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-03-18 03:46:53.249953 | orchestrator | b3ee8afa01cb registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-03-18 03:46:53.249961 | orchestrator | 03941a6bb955 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler
2026-03-18 03:46:53.249969 | orchestrator | 5be1d395f147 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-03-18 03:46:53.249977 | orchestrator | e5662227957e registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-03-18 03:46:53.249989 | orchestrator | d9dd560beacc registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-03-18 03:46:53.250000 | orchestrator | bd3bff9842ec registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-03-18 03:46:53.250153 | orchestrator | daf41b515b8c registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-03-18 03:46:53.250170 | orchestrator | fb7252426bb6 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-03-18 03:46:53.250183 | orchestrator | 09c36d926166 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-03-18 03:46:53.250191 | orchestrator | 010c2b96e10f registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-03-18 03:46:53.250199 | orchestrator | ea61e79a3432 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker
2026-03-18 03:46:53.250223 | orchestrator | d0506459f4fd registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-03-18 03:46:53.250232 | orchestrator | 8e10741aaa79 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-03-18 03:46:53.250240 | orchestrator | b3699a642d13 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-03-18 03:46:53.250255 | orchestrator | 409e6c740f7c registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-03-18 03:46:53.250263 | orchestrator | 7942514e03ee registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker
2026-03-18 03:46:53.250271 | orchestrator | c6e203f6ec25 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-03-18 03:46:53.250283 | orchestrator | 692629a3a80e registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-03-18 03:46:53.250291 | orchestrator | 0981f9fe0308 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-03-18 03:46:53.250299 | orchestrator | 16a222a3bcf2 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-03-18 03:46:53.250307 | orchestrator | f820306c7632 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-03-18 03:46:53.250315 | orchestrator | 794aa9a24a44 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-03-18 03:46:53.250324 | orchestrator | ba9d5eb26333 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-03-18 03:46:53.250334 | orchestrator | 29e24c134f4a registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api
2026-03-18 03:46:53.250343 | orchestrator | 6f26ea0be8ee registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-03-18 03:46:53.250352 | orchestrator | 75252a0cd133 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume
2026-03-18 03:46:53.250361 | orchestrator | 26dd2ad8670d registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler
2026-03-18 03:46:53.250370 | orchestrator | f909485e8050 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-03-18 03:46:53.250384 | orchestrator | 9832914d76c9 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api
2026-03-18 03:46:53.250397 | orchestrator | 690d059f41ec registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console
2026-03-18 03:46:53.250416 | orchestrator | 1908168c2b96 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver
2026-03-18 03:46:53.250445 | orchestrator | 744e01367867 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon
2026-03-18 03:46:53.250459 | orchestrator | dbcec2412b4f registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy
2026-03-18 03:46:53.250473 | orchestrator | a56a0acfd8f2 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_conductor
2026-03-18 03:46:53.250486 | orchestrator | 97894b7e294c registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api
2026-03-18 03:46:53.250499 | orchestrator | 1a19d723bbfd registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler
2026-03-18 03:46:53.250512 | orchestrator | 2960c138fad0 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) neutron_server
2026-03-18 03:46:53.250527 | orchestrator | ddcd3227de7a registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) placement_api
2026-03-18 03:46:53.250540 | orchestrator | 862f791e447f registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone
2026-03-18 03:46:53.250553 | orchestrator | 33ea8d3a1ad0 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_fernet
2026-03-18 03:46:53.250566 | orchestrator | 55997899a684 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 57 minutes ago Up 57 minutes (healthy) keystone_ssh
2026-03-18 03:46:53.250579 | orchestrator | 255abc4612e5 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-0
2026-03-18 03:46:53.250592 | orchestrator | 4ff57cdfad23 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-03-18 03:46:53.250612 | orchestrator | dfaa0207b10e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-03-18 03:46:53.250625 | orchestrator | 0bc4f5012c9c registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-03-18 03:46:53.250638 | orchestrator | 4b5665f45686 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-03-18 03:46:53.250652 | orchestrator | 2ca6531f839c registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-03-18 03:46:53.250665 | orchestrator | b7c8500ac483 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-03-18 03:46:53.250679 | orchestrator | 9253ec71803d registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-03-18 03:46:53.250709 | orchestrator | 972855a94543 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-03-18 03:46:53.250723 | orchestrator | 6fe5636c2059 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-03-18 03:46:53.250738 | orchestrator | cf76106e5b0d registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-03-18 03:46:53.250746 | orchestrator | 0fe7b58ac320 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-03-18 03:46:53.250755 | orchestrator | c0e05e7d95a0 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-03-18 03:46:53.250763 | orchestrator | 168bfe634208 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-03-18 03:46:53.250771 | orchestrator | 4d87c330bf3c registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards
2026-03-18 03:46:53.250779 | orchestrator | ab2910b234d8 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-03-18 03:46:53.250786 | orchestrator | 4cad96fe601b registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-03-18 03:46:53.250795 | orchestrator | 84a5dcbfe55f registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-03-18 03:46:53.250803 | orchestrator | 25794cf07f51 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-03-18 03:46:53.250811 | orchestrator | 7c818a0c3e1b registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-18 03:46:53.250819 | orchestrator | a5531912afe1 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-18 03:46:53.250827 | orchestrator | 25bff5977411 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-03-18 03:46:53.615089 | orchestrator | 2026-03-18 03:46:53.615174 | orchestrator | ## Images @ testbed-node-0 2026-03-18 03:46:53.615187 | orchestrator | 2026-03-18 03:46:53.615222 | orchestrator | + echo 2026-03-18 03:46:53.615232 | orchestrator | + echo '## Images @ testbed-node-0' 2026-03-18 03:46:53.615241 | orchestrator | + echo 2026-03-18 03:46:53.615250 | orchestrator | + osism container testbed-node-0 images 2026-03-18 03:46:56.256176 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-18 03:46:56.256305 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-18 03:46:56.256327 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-18 03:46:56.256343 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-18 03:46:56.256400 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-18 03:46:56.256417 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-18 03:46:56.256432 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-18 03:46:56.256447 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-18 03:46:56.256461 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-18 03:46:56.256477 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-18 03:46:56.256493 | orchestrator | registry.osism.tech/kolla/release/haproxy 
2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-18 03:46:56.256508 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-18 03:46:56.256523 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-18 03:46:56.256538 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-18 03:46:56.256554 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-18 03:46:56.256570 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-18 03:46:56.256585 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-18 03:46:56.256599 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-18 03:46:56.256632 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-18 03:46:56.256648 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-18 03:46:56.256663 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-18 03:46:56.256679 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-18 03:46:56.256693 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-18 03:46:56.256709 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-18 03:46:56.256725 | orchestrator | 
registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-18 03:46:56.256741 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-18 03:46:56.256757 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-18 03:46:56.256773 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-18 03:46:56.256788 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB 2026-03-18 03:46:56.256802 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB 2026-03-18 03:46:56.256831 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-18 03:46:56.256847 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-18 03:46:56.256888 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB 2026-03-18 03:46:56.256906 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB 2026-03-18 03:46:56.256921 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB 2026-03-18 03:46:56.256936 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB 2026-03-18 03:46:56.256952 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB 2026-03-18 03:46:56.256968 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB 2026-03-18 03:46:56.256984 | orchestrator | 
registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB 2026-03-18 03:46:56.256999 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB 2026-03-18 03:46:56.257047 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-18 03:46:56.257063 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-18 03:46:56.257080 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-18 03:46:56.257095 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-18 03:46:56.257112 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-18 03:46:56.257127 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-18 03:46:56.257142 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-18 03:46:56.257166 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-18 03:46:56.257180 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-18 03:46:56.257193 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-18 03:46:56.257206 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-18 03:46:56.257219 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-18 03:46:56.257232 | orchestrator | 
registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-18 03:46:56.257241 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-18 03:46:56.257249 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-18 03:46:56.257256 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-18 03:46:56.257273 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-18 03:46:56.257281 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-18 03:46:56.257288 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-18 03:46:56.257296 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB 2026-03-18 03:46:56.257304 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB 2026-03-18 03:46:56.257311 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-18 03:46:56.257319 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-18 03:46:56.257327 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-18 03:46:56.257348 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-18 03:46:56.257361 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-18 03:46:56.257375 | orchestrator | 
registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-18 03:46:56.257387 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-18 03:46:56.257399 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-18 03:46:56.257412 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-18 03:46:56.613966 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-18 03:46:56.614757 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-18 03:46:56.671384 | orchestrator | 2026-03-18 03:46:56.671452 | orchestrator | ## Containers @ testbed-node-1 2026-03-18 03:46:56.671463 | orchestrator | 2026-03-18 03:46:56.671483 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-18 03:46:56.671488 | orchestrator | + echo 2026-03-18 03:46:56.671494 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-03-18 03:46:56.671498 | orchestrator | + echo 2026-03-18 03:46:56.671503 | orchestrator | + osism container testbed-node-1 ps 2026-03-18 03:46:59.296212 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-18 03:46:59.296288 | orchestrator | f67c5b5a68b2 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-03-18 03:46:59.296297 | orchestrator | dcd85716aa76 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-03-18 03:46:59.296303 | orchestrator | 91259a8b1f8b registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-03-18 03:46:59.296308 | orchestrator | 5978074914a5 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 
9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-03-18 03:46:59.296329 | orchestrator | 73c586bf9fad registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 9 minutes prometheus_cadvisor 2026-03-18 03:46:59.296348 | orchestrator | d35398e87014 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-03-18 03:46:59.296354 | orchestrator | eb9a761a8851 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-03-18 03:46:59.296362 | orchestrator | bc78dd85fa6f registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-03-18 03:46:59.296367 | orchestrator | af759203b8d3 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-03-18 03:46:59.296372 | orchestrator | 7438126c9fa2 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-03-18 03:46:59.296377 | orchestrator | 623f2410e4c4 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-03-18 03:46:59.296382 | orchestrator | 84b533ae91c9 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-03-18 03:46:59.296387 | orchestrator | 504698f4b199 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-03-18 03:46:59.296392 | orchestrator | e113ce0cac95 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 
"dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-03-18 03:46:59.296397 | orchestrator | 7fd5904ad8e9 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-03-18 03:46:59.296401 | orchestrator | 99fc2818f4d1 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-03-18 03:46:59.296406 | orchestrator | 039a917a6505 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-03-18 03:46:59.296411 | orchestrator | d763b1f3d8be registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-03-18 03:46:59.296416 | orchestrator | 37bb41e75c2c registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-03-18 03:46:59.296432 | orchestrator | a49094075c8f registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-03-18 03:46:59.296438 | orchestrator | 5ae902e91cf8 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-03-18 03:46:59.296443 | orchestrator | 28ef6f86de11 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-03-18 03:46:59.296447 | orchestrator | 567c623a0c59 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-03-18 03:46:59.296457 | orchestrator | 8e4c41f4cee1 
registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker 2026-03-18 03:46:59.296462 | orchestrator | c88e95c7ee82 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-03-18 03:46:59.296467 | orchestrator | e470f453685d registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-03-18 03:46:59.296472 | orchestrator | 04fa18e0b3c9 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-03-18 03:46:59.296480 | orchestrator | de99afe20236 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-03-18 03:46:59.296485 | orchestrator | 4790552d3020 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-03-18 03:46:59.296490 | orchestrator | ab82668e8ebc registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-03-18 03:46:59.296495 | orchestrator | acc3b5506d04 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener 2026-03-18 03:46:59.296500 | orchestrator | b1078eb8f4b0 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api 2026-03-18 03:46:59.296505 | orchestrator | 79eb56b192e7 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-03-18 
03:46:59.296510 | orchestrator | 0d890ed48f12 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) cinder_volume 2026-03-18 03:46:59.296514 | orchestrator | 046b7fb09371 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-03-18 03:46:59.296519 | orchestrator | 2cec8b3d1fc5 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-03-18 03:46:59.296524 | orchestrator | d5c0959d28ba registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-03-18 03:46:59.296529 | orchestrator | f9a36bc22f41 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-03-18 03:46:59.296536 | orchestrator | 1fc69a77b182 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver 2026-03-18 03:46:59.296549 | orchestrator | e4a7c04606b9 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon 2026-03-18 03:46:59.296575 | orchestrator | e88cd80c9d52 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy 2026-03-18 03:46:59.296598 | orchestrator | 373128e5bc44 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_conductor 2026-03-18 03:46:59.296606 | orchestrator | 99c65aacdfd1 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api 2026-03-18 03:46:59.296613 | orchestrator | 
2e6329ea53a7 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler 2026-03-18 03:46:59.296621 | orchestrator | 8a78485cd319 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) neutron_server 2026-03-18 03:46:59.296628 | orchestrator | af003b1c3221 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) placement_api 2026-03-18 03:46:59.296636 | orchestrator | 39ce9f7ea2f4 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone 2026-03-18 03:46:59.296644 | orchestrator | c1c12f71e41e registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_fernet 2026-03-18 03:46:59.296651 | orchestrator | 3955269c405b registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_ssh 2026-03-18 03:46:59.296659 | orchestrator | 70272efeb2d1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-1 2026-03-18 03:46:59.296667 | orchestrator | ded9a4e78417 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-03-18 03:46:59.296675 | orchestrator | 1edfdf2d0145 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-03-18 03:46:59.296683 | orchestrator | de5a8f2e4f10 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-03-18 03:46:59.296690 | orchestrator | 630714a165c7 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 
"dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-03-18 03:46:59.296699 | orchestrator | d666bc557480 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-03-18 03:46:59.296704 | orchestrator | 6f473337a285 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-03-18 03:46:59.296708 | orchestrator | dfc86792addb registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-03-18 03:46:59.296713 | orchestrator | b95bc8513ca9 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-03-18 03:46:59.296722 | orchestrator | 96042fc133f6 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-03-18 03:46:59.296732 | orchestrator | 5acd54c07e35 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-03-18 03:46:59.296737 | orchestrator | 1d76b85ff7d8 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-03-18 03:46:59.296742 | orchestrator | 9167f333d474 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-03-18 03:46:59.296747 | orchestrator | 700a1abb5cac registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-03-18 03:46:59.296752 | orchestrator | 3768285d2a1a registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 
"dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-03-18 03:46:59.296756 | orchestrator | 97b815353fab registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-03-18 03:46:59.296761 | orchestrator | f09125af6460 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-03-18 03:46:59.296766 | orchestrator | a1bf91fd74c1 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-03-18 03:46:59.296771 | orchestrator | 50da5d0cbb6e registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-03-18 03:46:59.296777 | orchestrator | 04719cc3fbf0 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-03-18 03:46:59.296785 | orchestrator | 07617bfbf8fb registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-03-18 03:46:59.296794 | orchestrator | 098660b03a5d registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-03-18 03:46:59.682547 | orchestrator | 2026-03-18 03:46:59.682644 | orchestrator | ## Images @ testbed-node-1 2026-03-18 03:46:59.682661 | orchestrator | 2026-03-18 03:46:59.682709 | orchestrator | + echo 2026-03-18 03:46:59.682723 | orchestrator | + echo '## Images @ testbed-node-1' 2026-03-18 03:46:59.682735 | orchestrator | + echo 2026-03-18 03:46:59.682746 | orchestrator | + osism container testbed-node-1 images 2026-03-18 03:47:02.269112 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-18 03:47:02.269219 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-18 03:47:02.269235 | orchestrator 
| registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-18 03:47:02.269248 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-18 03:47:02.269261 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-18 03:47:02.269272 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-18 03:47:02.269306 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-18 03:47:02.269318 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-18 03:47:02.269328 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-18 03:47:02.269339 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-18 03:47:02.269350 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-18 03:47:02.269360 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-18 03:47:02.269371 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-18 03:47:02.269382 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-18 03:47:02.269393 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-18 03:47:02.269403 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-18 03:47:02.269414 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 
aedc672fb472 3 months ago 301MB
2026-03-18 03:47:02.269425 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB
2026-03-18 03:47:02.269435 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-18 03:47:02.269446 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB
2026-03-18 03:47:02.269474 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-18 03:47:02.269486 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB
2026-03-18 03:47:02.269497 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB
2026-03-18 03:47:02.269507 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB
2026-03-18 03:47:02.269518 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB
2026-03-18 03:47:02.269529 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB
2026-03-18 03:47:02.269540 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB
2026-03-18 03:47:02.269555 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB
2026-03-18 03:47:02.269566 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB
2026-03-18 03:47:02.269577 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB
2026-03-18 03:47:02.269587 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB
2026-03-18 03:47:02.269598 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB
2026-03-18 03:47:02.269635 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB
2026-03-18 03:47:02.269649 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB
2026-03-18 03:47:02.269662 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB
2026-03-18 03:47:02.269674 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB
2026-03-18 03:47:02.269687 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB
2026-03-18 03:47:02.269699 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB
2026-03-18 03:47:02.269711 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB
2026-03-18 03:47:02.269724 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB
2026-03-18 03:47:02.269736 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB
2026-03-18 03:47:02.269748 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB
2026-03-18 03:47:02.269761 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB
2026-03-18 03:47:02.269773 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB
2026-03-18 03:47:02.269785 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB
2026-03-18 03:47:02.269798 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB
2026-03-18 03:47:02.269811 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB
2026-03-18 03:47:02.269824 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB
2026-03-18 03:47:02.269836 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB
2026-03-18 03:47:02.269849 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB
2026-03-18 03:47:02.269888 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB
2026-03-18 03:47:02.269901 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB
2026-03-18 03:47:02.269912 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB
2026-03-18 03:47:02.269923 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB
2026-03-18 03:47:02.269934 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB
2026-03-18 03:47:02.269944 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB
2026-03-18 03:47:02.269955 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB
2026-03-18 03:47:02.269966 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB
2026-03-18 03:47:02.269985 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB
2026-03-18 03:47:02.269996 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB
2026-03-18 03:47:02.270103 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB
2026-03-18 03:47:02.270119 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB
2026-03-18 03:47:02.270130 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB
2026-03-18 03:47:02.270176 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB
2026-03-18 03:47:02.270196 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB
2026-03-18 03:47:02.270207 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB
2026-03-18 03:47:02.270218 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB
2026-03-18 03:47:02.270228 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB
2026-03-18 03:47:02.270239 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB
2026-03-18 03:47:02.270250 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB
2026-03-18 03:47:02.645726 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-18 03:47:02.646790 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-18 03:47:02.716552 | orchestrator |
2026-03-18 03:47:02.716662 | orchestrator | ## Containers @ testbed-node-2
2026-03-18 03:47:02.716683 | orchestrator |
2026-03-18 03:47:02.716743 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-18 03:47:02.716763 | orchestrator | + echo
2026-03-18 03:47:02.716781 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-03-18 03:47:02.716796 | orchestrator | + echo
2026-03-18 03:47:02.716807 | orchestrator | + osism container testbed-node-2 ps
2026-03-18 03:47:05.236866 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-18 03:47:05.237089 | orchestrator | ac3d8bc3cc58 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-03-18 03:47:05.237127 | orchestrator | 0ac1c6a86f7c registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-03-18 03:47:05.237146 | orchestrator | a187930f5443 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2026-03-18 03:47:05.237163 | orchestrator | 667d6f1bdec5 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-03-18 03:47:05.237181 | orchestrator | d67f9d5c28ec registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-03-18 03:47:05.237199 | orchestrator | 8649d04de437 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-03-18 03:47:05.237217 | orchestrator | 8acf26dbb968 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-03-18 03:47:05.237266 | orchestrator | 96f14aa4333d registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-03-18 03:47:05.237278 | orchestrator | a99d8719f3bc registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-03-18 03:47:05.237288 | orchestrator | 76ca0a542925 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-03-18 03:47:05.237297 | orchestrator | e38c3b4b981a registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-03-18 03:47:05.237313 | orchestrator | d4ee36ffc9fc registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-03-18 03:47:05.237323 | orchestrator | 5e1acf5c5c2c registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-03-18 03:47:05.237333 | orchestrator | 46b78eb15532 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-03-18 03:47:05.237343 | orchestrator | fd80e3c87c3f registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-03-18 03:47:05.237352 | orchestrator | a3e64c6a1367 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-03-18 03:47:05.237362 | orchestrator | 1d80524a3e62 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-03-18 03:47:05.237372 | orchestrator | ceb7ae5e569f registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-03-18 03:47:05.237381 | orchestrator | 28abff2ef5fe registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker
2026-03-18 03:47:05.237413 | orchestrator | 26baae6b3856 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-03-18 03:47:05.237425 | orchestrator | 773e292ca9e0 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-03-18 03:47:05.237436 | orchestrator | b6b5383e902d registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-03-18 03:47:05.237447 | orchestrator | 76ce1ae3170c registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-03-18 03:47:05.237459 | orchestrator | 853dccb0bcdd registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker
2026-03-18 03:47:05.237477 | orchestrator | e6b23f4229af registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-03-18 03:47:05.237488 | orchestrator | 8aa8c44d2d4d registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-03-18 03:47:05.237500 | orchestrator | 4e67eb89e642 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-03-18 03:47:05.237510 | orchestrator | 4925828369b6 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-03-18 03:47:05.237522 | orchestrator | 363c59b27536 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-03-18 03:47:05.237533 | orchestrator | 9fdc7fcbb572 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-03-18 03:47:05.237545 | orchestrator | 7c13971d5369 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener
2026-03-18 03:47:05.237556 | orchestrator | cc8e77892ee2 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api
2026-03-18 03:47:05.237567 | orchestrator | cb14cb2e7973 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-03-18 03:47:05.237580 | orchestrator | 674f229869c9 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume
2026-03-18 03:47:05.237597 | orchestrator | 729585372b27 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler
2026-03-18 03:47:05.237615 | orchestrator | fd53ccd0e5da registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-03-18 03:47:05.237631 | orchestrator | 4dbd1c79242c registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) glance_api
2026-03-18 03:47:05.237647 | orchestrator | 80c36517f145 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console
2026-03-18 03:47:05.237671 | orchestrator | 2f86dc9766d5 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) skyline_apiserver
2026-03-18 03:47:05.237699 | orchestrator | 94e12e9b3dd5 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon
2026-03-18 03:47:05.237716 | orchestrator | 4abbb0de4c3b registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy
2026-03-18 03:47:05.237734 | orchestrator | b73ce380665a registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_conductor
2026-03-18 03:47:05.237761 | orchestrator | 63399719b234 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api
2026-03-18 03:47:05.237777 | orchestrator | 4816ded20f45 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler
2026-03-18 03:47:05.237794 | orchestrator | be7f9870daf8 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) neutron_server
2026-03-18 03:47:05.237810 | orchestrator | 20c69a3b4520 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) placement_api
2026-03-18 03:47:05.237826 | orchestrator | 478d96f989a3 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone
2026-03-18 03:47:05.237836 | orchestrator | 5f1e6b17a04c registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_fernet
2026-03-18 03:47:05.237846 | orchestrator | 5de6da62fd0f registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_ssh
2026-03-18 03:47:05.237856 | orchestrator | 0d4c2a98fac4 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-2
2026-03-18 03:47:05.237865 | orchestrator | 6695d0585489 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2
2026-03-18 03:47:05.237875 | orchestrator | fc8e238828f1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2
2026-03-18 03:47:05.237890 | orchestrator | 276037d9386d registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-03-18 03:47:05.237900 | orchestrator | 8588bc2a2ea4 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-03-18 03:47:05.237910 | orchestrator | d80c35aa7891 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-03-18 03:47:05.237919 | orchestrator | 45dbb5905ddc registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-03-18 03:47:05.237929 | orchestrator | 17d836f5837c registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-03-18 03:47:05.237939 | orchestrator | f3796c7a4297 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-03-18 03:47:05.237948 | orchestrator | 264675089f37 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-03-18 03:47:05.237965 | orchestrator | 45e9efba49d5 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-03-18 03:47:05.237982 | orchestrator | e9f8bdaa5fd6 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-03-18 03:47:05.237992 | orchestrator | 63f4ccd2f14d registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-03-18 03:47:05.238002 | orchestrator | 70d96fca38b8 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-03-18 03:47:05.238124 | orchestrator | 0a700932c9e2 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-03-18 03:47:05.238134 | orchestrator | 633bd7b358b2 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-03-18 03:47:05.238144 | orchestrator | b48f7d63ca86 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-03-18 03:47:05.238154 | orchestrator | 5a230000c714 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-03-18 03:47:05.238164 | orchestrator | 5e5fbe13d459 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-03-18 03:47:05.238174 | orchestrator | 5caee5cfa7f7 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-18 03:47:05.238184 | orchestrator | 9e4462eccd5e registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-18 03:47:05.238194 | orchestrator | 98833fe28bf0 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-18 03:47:05.610722 | orchestrator |
2026-03-18 03:47:05.610914 | orchestrator | ## Images @ testbed-node-2
2026-03-18 03:47:05.610939 | orchestrator |
2026-03-18 03:47:05.610983 | orchestrator | + echo
2026-03-18 03:47:05.610994 | orchestrator | + echo '## Images @ testbed-node-2'
2026-03-18 03:47:05.611056 | orchestrator | + echo
2026-03-18 03:47:05.611066 | orchestrator | + osism container testbed-node-2 images
2026-03-18 03:47:08.175876 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-18 03:47:08.175967 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB
2026-03-18 03:47:08.175977 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB
2026-03-18 03:47:08.175986 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB
2026-03-18 03:47:08.175993 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB
2026-03-18 03:47:08.176000 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB
2026-03-18 03:47:08.176034 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB
2026-03-18 03:47:08.176041 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB
2026-03-18 03:47:08.176070 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB
2026-03-18 03:47:08.176077 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB
2026-03-18 03:47:08.176083 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB
2026-03-18 03:47:08.176093 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB
2026-03-18 03:47:08.176100 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB
2026-03-18 03:47:08.176107 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB
2026-03-18 03:47:08.176114 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB
2026-03-18 03:47:08.176134 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB
2026-03-18 03:47:08.176141 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB
2026-03-18 03:47:08.176147 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB
2026-03-18 03:47:08.176153 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-18 03:47:08.176159 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB
2026-03-18 03:47:08.176165 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-18 03:47:08.176172 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB
2026-03-18 03:47:08.176178 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB
2026-03-18 03:47:08.176184 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB
2026-03-18 03:47:08.176189 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB
2026-03-18 03:47:08.176195 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB
2026-03-18 03:47:08.176201 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB
2026-03-18 03:47:08.176207 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB
2026-03-18 03:47:08.176213 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB
2026-03-18 03:47:08.176219 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB
2026-03-18 03:47:08.176225 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB
2026-03-18 03:47:08.176232 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB
2026-03-18 03:47:08.176254 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB
2026-03-18 03:47:08.176260 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB
2026-03-18 03:47:08.176278 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB
2026-03-18 03:47:08.176284 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB
2026-03-18 03:47:08.176290 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB
2026-03-18 03:47:08.176296 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB
2026-03-18 03:47:08.176302 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB
2026-03-18 03:47:08.176309 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB
2026-03-18 03:47:08.176315 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB
2026-03-18 03:47:08.176322 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB
2026-03-18 03:47:08.176328 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB
2026-03-18 03:47:08.176334 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB
2026-03-18 03:47:08.176340 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB
2026-03-18 03:47:08.176346 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB
2026-03-18 03:47:08.176351 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB
2026-03-18 03:47:08.176357 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB
2026-03-18 03:47:08.176363 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB
2026-03-18 03:47:08.176369 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB
2026-03-18 03:47:08.176375 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB
2026-03-18 03:47:08.176382 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB
2026-03-18 03:47:08.176388 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB
2026-03-18 03:47:08.176394 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB
2026-03-18 03:47:08.176400 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB
2026-03-18 03:47:08.176407 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB
2026-03-18 03:47:08.176413 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB
2026-03-18 03:47:08.176419 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB
2026-03-18 03:47:08.176425 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB
2026-03-18 03:47:08.176431 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB
2026-03-18 03:47:08.176442 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB
2026-03-18 03:47:08.176448 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB
2026-03-18 03:47:08.176454 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB
2026-03-18 03:47:08.176461 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB
2026-03-18 03:47:08.176471 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB
2026-03-18 03:47:08.176478 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB
2026-03-18 03:47:08.176484 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB
2026-03-18 03:47:08.176491 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB
2026-03-18 03:47:08.176497 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB
2026-03-18 03:47:08.176503 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB
2026-03-18 03:47:08.530758 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-03-18 03:47:08.539741 | orchestrator | + set -e
2026-03-18 03:47:08.539827 | orchestrator | + source /opt/manager-vars.sh
2026-03-18 03:47:08.539841 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-18 03:47:08.539852 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-18 03:47:08.539861 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-18 03:47:08.539870 | orchestrator | ++ CEPH_VERSION=reef
2026-03-18 03:47:08.539877 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-18 03:47:08.539884 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-18 03:47:08.539889 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-18 03:47:08.539895 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-18 03:47:08.539901 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-18 03:47:08.539907 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-18 03:47:08.539913 | orchestrator | ++ export ARA=false
2026-03-18 03:47:08.539918 | orchestrator | ++ ARA=false
2026-03-18 03:47:08.539924 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-18 03:47:08.539930 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-18 03:47:08.539935 | orchestrator | ++ export TEMPEST=false
2026-03-18 03:47:08.539941 | orchestrator | ++ TEMPEST=false
2026-03-18 03:47:08.539946 | orchestrator | ++ export IS_ZUUL=true
2026-03-18 03:47:08.539952 | orchestrator | ++ IS_ZUUL=true
2026-03-18 03:47:08.539957 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43
2026-03-18 03:47:08.539963 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43
2026-03-18 03:47:08.539968 | orchestrator | ++ export EXTERNAL_API=false
2026-03-18 03:47:08.539974 | orchestrator | ++ EXTERNAL_API=false
2026-03-18 03:47:08.539979 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-18 03:47:08.539985 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-18 03:47:08.539991 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-18 03:47:08.539997 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-18 03:47:08.540041 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-18 03:47:08.540048 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-18 03:47:08.540053 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-18 03:47:08.540062 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-03-18 03:47:08.548778 | orchestrator | + set -e
2026-03-18 03:47:08.548846 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-18 03:47:08.548854 | orchestrator | ++ export INTERACTIVE=false
2026-03-18 03:47:08.548861 | orchestrator | ++ INTERACTIVE=false
2026-03-18 03:47:08.548867 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-18 03:47:08.548873 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-18 03:47:08.548880 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-18 03:47:08.548968 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-18 03:47:08.552758 | orchestrator |
2026-03-18 03:47:08.552814 | orchestrator | # Ceph status
2026-03-18 03:47:08.552822 | orchestrator |
2026-03-18 03:47:08.552846 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-18 03:47:08.552857 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-18 03:47:08.552872 | orchestrator | + echo
2026-03-18 03:47:08.552885 | orchestrator | + echo '# Ceph status'
2026-03-18 03:47:08.552894 | orchestrator | + echo
2026-03-18 03:47:08.552904 | orchestrator | + ceph -s
2026-03-18 03:47:09.198721 | orchestrator | cluster:
2026-03-18 03:47:09.198813 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-03-18 03:47:09.198828 | orchestrator | health: HEALTH_OK
2026-03-18 03:47:09.198838 | orchestrator |
2026-03-18 03:47:09.198848 | orchestrator | services:
2026-03-18 03:47:09.198857 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 70m)
2026-03-18 03:47:09.198878 | orchestrator | mgr: testbed-node-0(active, since 58m), standbys: testbed-node-2, testbed-node-1
2026-03-18 03:47:09.198888 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-03-18 03:47:09.198898 | orchestrator | osd: 6 osds: 6 up (since 67m), 6 in (since 68m)
2026-03-18 03:47:09.198907 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-03-18 03:47:09.198916 | orchestrator |
2026-03-18 03:47:09.198925 | orchestrator | data:
2026-03-18 03:47:09.198934 | orchestrator | volumes: 1/1 healthy
2026-03-18 03:47:09.198943 | orchestrator | pools: 14 pools, 401 pgs
2026-03-18 03:47:09.198952 | orchestrator | objects: 552 objects, 2.2 GiB
2026-03-18 03:47:09.198961 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-03-18 03:47:09.198970 | orchestrator | pgs: 401 active+clean
2026-03-18 03:47:09.198978 | orchestrator |
2026-03-18 03:47:09.248540 | orchestrator |
2026-03-18 03:47:09.248623 | orchestrator | # Ceph versions
2026-03-18 03:47:09.248635 | orchestrator |
2026-03-18 03:47:09.248667 | orchestrator | + echo
2026-03-18 03:47:09.248677 | orchestrator | + echo '# Ceph versions'
2026-03-18 03:47:09.248686 | orchestrator | + echo
2026-03-18 03:47:09.248691 | orchestrator | + ceph versions
2026-03-18 03:47:09.844515 | orchestrator | {
2026-03-18 03:47:09.844609 | orchestrator | "mon": {
2026-03-18 03:47:09.844623 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-18 03:47:09.844634 | orchestrator | },
2026-03-18 03:47:09.844645 | orchestrator | "mgr": {
2026-03-18 03:47:09.844655 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-18 03:47:09.844665 | orchestrator | },
2026-03-18 03:47:09.844674 | orchestrator | "osd": {
2026-03-18 03:47:09.844684 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-03-18 03:47:09.844694 | orchestrator | },
2026-03-18 03:47:09.844703 | orchestrator | "mds": {
2026-03-18 03:47:09.844713 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-18 03:47:09.844723 | orchestrator | },
2026-03-18 03:47:09.844732 | orchestrator | "rgw": {
2026-03-18 03:47:09.844742 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-18 03:47:09.844752 | orchestrator | },
2026-03-18 03:47:09.844761 | orchestrator | "overall": {
2026-03-18 03:47:09.844793 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-03-18 03:47:09.844803 | orchestrator | }
2026-03-18 03:47:09.844813 | orchestrator | }
2026-03-18 03:47:09.903498 | orchestrator |
2026-03-18 03:47:09.903578 | orchestrator | # Ceph OSD tree
2026-03-18 03:47:09.903592 | orchestrator |
2026-03-18 03:47:09.903656 | orchestrator | + echo
2026-03-18 03:47:09.903667 | orchestrator | + echo '# Ceph OSD tree'
2026-03-18 03:47:09.903676 | orchestrator | + echo
2026-03-18 03:47:09.903683 | orchestrator | + ceph osd df tree
2026-03-18 03:47:10.436399 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-03-18 03:47:10.436522 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 385 MiB 113 GiB 5.88 1.00 - root default
2026-03-18 03:47:10.436547 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-3
2026-03-18 03:47:10.436566 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 62 MiB 19 GiB 6.45 1.10 192 up osd.1
2026-03-18 03:47:10.436584 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1019 MiB 1 KiB 62 MiB 19 GiB 5.28 0.90 196 up osd.4
2026-03-18 03:47:10.436637 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-4
2026-03-18 03:47:10.436677 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 62 MiB 19 GiB 7.04 1.20 201 up osd.0
2026-03-18 03:47:10.436694 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 960 MiB 899 MiB 1 KiB 62 MiB 19 GiB 4.69 0.80 189 up osd.5
2026-03-18 03:47:10.436711 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-5
2026-03-18 03:47:10.436733 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 66 MiB 19 GiB 6.40 1.09 192 up osd.2
2026-03-18 03:47:10.436752 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 74 MiB 19 GiB 5.42 0.92 200 up osd.3
2026-03-18 03:47:10.436769 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 385 MiB 113 GiB 5.88
2026-03-18 03:47:10.436785 | orchestrator | MIN/MAX VAR: 0.80/1.20 STDDEV: 0.81
2026-03-18 03:47:10.490614 | orchestrator |
2026-03-18 03:47:10.490735 | orchestrator | # Ceph monitor status
2026-03-18 03:47:10.490761 | orchestrator |
2026-03-18 03:47:10.490831 | orchestrator | + echo
2026-03-18 03:47:10.490856 | orchestrator | + echo '# Ceph monitor status'
2026-03-18 03:47:10.490874 | orchestrator | + echo
2026-03-18 03:47:10.490893 | orchestrator | + ceph mon stat
2026-03-18 03:47:11.117310 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-03-18 03:47:11.161923 | orchestrator |
2026-03-18 03:47:11.162107 | orchestrator | # Ceph quorum status
2026-03-18 03:47:11.162126 | orchestrator |
2026-03-18 03:47:11.162168 | orchestrator | + echo
2026-03-18 03:47:11.162180 | orchestrator | + echo '# Ceph quorum status'
2026-03-18 03:47:11.162191 | orchestrator | + echo
2026-03-18 03:47:11.162253 | orchestrator | + ceph quorum_status
2026-03-18 03:47:11.162566 | orchestrator | + jq
2026-03-18 03:47:11.868627 | orchestrator | {
2026-03-18 03:47:11.868733 | orchestrator | "election_epoch": 4,
2026-03-18 03:47:11.868752 | orchestrator | "quorum": [
2026-03-18 03:47:11.868765 | orchestrator | 0,
2026-03-18 03:47:11.868778 | orchestrator | 1,
2026-03-18 03:47:11.868790 | orchestrator | 2
2026-03-18 03:47:11.868797 | orchestrator | ],
2026-03-18 03:47:11.868804 | orchestrator | "quorum_names": [
2026-03-18 03:47:11.868811 | orchestrator | "testbed-node-0",
2026-03-18 03:47:11.868817 | orchestrator | "testbed-node-1",
2026-03-18 03:47:11.868824 | orchestrator | "testbed-node-2"
2026-03-18 03:47:11.868831 | orchestrator | ],
2026-03-18 03:47:11.868838 | orchestrator | "quorum_leader_name": "testbed-node-0",
2026-03-18 03:47:11.868846 | orchestrator | "quorum_age": 4248,
2026-03-18 03:47:11.868853 | orchestrator | "features": {
2026-03-18 03:47:11.868860 | orchestrator | "quorum_con": "4540138322906710015",
2026-03-18 03:47:11.868868 | orchestrator | "quorum_mon": [
2026-03-18 03:47:11.868875 | orchestrator | "kraken",
2026-03-18 03:47:11.868882 | orchestrator | "luminous",
2026-03-18 03:47:11.868889 | orchestrator | "mimic",
2026-03-18 03:47:11.868896 | orchestrator | "osdmap-prune",
2026-03-18 03:47:11.868904 | orchestrator | "nautilus",
2026-03-18 03:47:11.868911 | orchestrator | "octopus",
2026-03-18 03:47:11.868918 | orchestrator | "pacific",
2026-03-18 03:47:11.868925 | orchestrator | "elector-pinging",
2026-03-18 03:47:11.868932 | orchestrator | "quincy",
2026-03-18 03:47:11.868939 | orchestrator | "reef"
2026-03-18 03:47:11.868947 | orchestrator | ]
2026-03-18 03:47:11.868954 | orchestrator | },
2026-03-18 03:47:11.868961 | orchestrator | "monmap": {
2026-03-18 03:47:11.868968 | orchestrator | "epoch": 1,
2026-03-18 03:47:11.868975 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2026-03-18 03:47:11.868984 | orchestrator | "modified": "2026-03-18T02:36:11.141327Z",
2026-03-18 03:47:11.868991 | orchestrator | "created": "2026-03-18T02:36:11.141327Z",
2026-03-18 03:47:11.868998 | orchestrator | "min_mon_release": 18,
2026-03-18 03:47:11.869045 | orchestrator | "min_mon_release_name": "reef",
2026-03-18 03:47:11.869053 | orchestrator | "election_strategy": 1,
2026-03-18 03:47:11.869060 | orchestrator | "disallowed_leaders: ": "",
2026-03-18 03:47:11.869095 | orchestrator | "stretch_mode": false,
2026-03-18 03:47:11.869108 | orchestrator | "tiebreaker_mon": "",
2026-03-18 03:47:11.869121 | orchestrator | "removed_ranks: ": "",
2026-03-18 03:47:11.869135 | orchestrator | "features": {
2026-03-18 03:47:11.869147 | orchestrator | "persistent": [
2026-03-18 03:47:11.869160 | orchestrator | "kraken",
2026-03-18 03:47:11.869168 | orchestrator | "luminous",
2026-03-18 03:47:11.869175 | orchestrator | "mimic",
2026-03-18 03:47:11.869184 | orchestrator | "osdmap-prune",
2026-03-18 03:47:11.869193 | orchestrator | "nautilus",
2026-03-18 03:47:11.869201 | orchestrator | "octopus",
2026-03-18 03:47:11.869209 | orchestrator | "pacific",
2026-03-18 03:47:11.869217 | orchestrator | "elector-pinging",
2026-03-18 03:47:11.869225 | orchestrator | "quincy",
2026-03-18 03:47:11.869233 | orchestrator | "reef"
2026-03-18 03:47:11.869242 | orchestrator | ],
2026-03-18 03:47:11.869250 | orchestrator | "optional": []
2026-03-18 03:47:11.869258 | orchestrator | },
2026-03-18 03:47:11.869267 | orchestrator | "mons": [
2026-03-18 03:47:11.869275 | orchestrator | {
2026-03-18 03:47:11.869283 | orchestrator | "rank": 0,
2026-03-18 03:47:11.869291 | orchestrator | "name": "testbed-node-0",
2026-03-18 03:47:11.869299 | orchestrator | "public_addrs": {
2026-03-18 03:47:11.869308 | orchestrator | "addrvec": [
2026-03-18 03:47:11.869316 | orchestrator | {
2026-03-18 03:47:11.869324 | orchestrator | "type": "v2",
2026-03-18 03:47:11.869333 | orchestrator | "addr": "192.168.16.8:3300",
2026-03-18 03:47:11.869342 | orchestrator | "nonce": 0
2026-03-18 03:47:11.869350 | orchestrator | },
2026-03-18 03:47:11.869358 | orchestrator | {
2026-03-18 03:47:11.869366 | orchestrator | "type": "v1",
2026-03-18 03:47:11.869395 | orchestrator | "addr": "192.168.16.8:6789",
2026-03-18 03:47:11.869402 | orchestrator | "nonce": 0
2026-03-18 03:47:11.869410 | orchestrator | }
2026-03-18 03:47:11.869417 | orchestrator | ]
2026-03-18 03:47:11.869424 | orchestrator | },
2026-03-18 03:47:11.869431 | orchestrator | "addr": "192.168.16.8:6789/0",
2026-03-18 03:47:11.869438 | orchestrator | "public_addr": "192.168.16.8:6789/0",
2026-03-18 03:47:11.869446 | orchestrator | "priority": 0,
2026-03-18 03:47:11.869453 | orchestrator | "weight": 0,
2026-03-18 03:47:11.869460 | orchestrator | "crush_location": "{}"
2026-03-18 03:47:11.869468 | orchestrator | },
2026-03-18 03:47:11.869475 | orchestrator | {
2026-03-18 03:47:11.869482 | orchestrator | "rank": 1,
2026-03-18 03:47:11.869489 | orchestrator | "name": "testbed-node-1",
2026-03-18 03:47:11.869496 | orchestrator | "public_addrs": {
2026-03-18 03:47:11.869504 | orchestrator | "addrvec": [
2026-03-18 03:47:11.869511 | orchestrator | {
2026-03-18 03:47:11.869518 | orchestrator | "type": "v2",
2026-03-18 03:47:11.869526 | orchestrator | "addr": "192.168.16.11:3300",
2026-03-18 03:47:11.869533 | orchestrator | "nonce": 0
2026-03-18 03:47:11.869540 | orchestrator | },
2026-03-18 03:47:11.869547 | orchestrator | {
2026-03-18 03:47:11.869554 | orchestrator | "type": "v1",
2026-03-18 03:47:11.869561 | orchestrator | "addr": "192.168.16.11:6789",
2026-03-18 03:47:11.869568 | orchestrator | "nonce": 0
2026-03-18 03:47:11.869576 | orchestrator | }
2026-03-18 03:47:11.869583 | orchestrator | ]
2026-03-18 03:47:11.869590 | orchestrator | },
2026-03-18 03:47:11.869597 | orchestrator | "addr": "192.168.16.11:6789/0",
2026-03-18 03:47:11.869604 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2026-03-18 03:47:11.869623 | orchestrator | "priority": 0,
2026-03-18 03:47:11.869630 | orchestrator | "weight": 0,
2026-03-18 03:47:11.869637 | orchestrator | "crush_location": "{}"
2026-03-18 03:47:11.869644 | orchestrator | },
2026-03-18 03:47:11.869652 | orchestrator | {
2026-03-18 03:47:11.869667 | orchestrator | "rank": 2,
2026-03-18 03:47:11.869674 | orchestrator | "name": "testbed-node-2",
2026-03-18 03:47:11.869682 | orchestrator | "public_addrs": {
2026-03-18 03:47:11.869689 | orchestrator | "addrvec": [
2026-03-18 03:47:11.869696 | orchestrator | {
2026-03-18 03:47:11.869703 | orchestrator | "type": "v2",
2026-03-18 03:47:11.869710 | orchestrator | "addr": "192.168.16.12:3300",
2026-03-18 03:47:11.869717 | orchestrator | "nonce": 0
2026-03-18 03:47:11.869725 | orchestrator | },
2026-03-18 03:47:11.869732 | orchestrator | {
2026-03-18 03:47:11.869739 | orchestrator | "type": "v1",
2026-03-18 03:47:11.869746 | orchestrator | "addr": "192.168.16.12:6789",
2026-03-18 03:47:11.869753 | orchestrator | "nonce": 0
2026-03-18 03:47:11.869767 | orchestrator | }
2026-03-18 03:47:11.869774 | orchestrator | ]
2026-03-18 03:47:11.869785 | orchestrator | },
2026-03-18 03:47:11.869797 | orchestrator | "addr": "192.168.16.12:6789/0",
2026-03-18 03:47:11.869808 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2026-03-18 03:47:11.869821 | orchestrator | "priority": 0,
2026-03-18 03:47:11.869850 | orchestrator | "weight": 0,
2026-03-18 03:47:11.869864 | orchestrator | "crush_location": "{}"
2026-03-18 03:47:11.869872 | orchestrator | }
2026-03-18 03:47:11.869879 | orchestrator | ]
2026-03-18 03:47:11.869886 | orchestrator | }
2026-03-18 03:47:11.869893 | orchestrator | }
2026-03-18 03:47:11.870084 | orchestrator |
2026-03-18 03:47:11.870100 | orchestrator | # Ceph free space status
2026-03-18 03:47:11.870108 | orchestrator |
2026-03-18 03:47:11.870115 | orchestrator | + echo
2026-03-18 03:47:11.870122 | orchestrator | + echo '# Ceph free space status'
2026-03-18 03:47:11.870129 | orchestrator | + echo
2026-03-18 03:47:11.870137 | orchestrator | + ceph df
2026-03-18 03:47:12.507294 | orchestrator | --- RAW STORAGE ---
2026-03-18 03:47:12.507380 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2026-03-18 03:47:12.507405 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.88
2026-03-18 03:47:12.507417 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.88
2026-03-18 03:47:12.507429 | orchestrator |
2026-03-18 03:47:12.507441 | orchestrator | --- POOLS ---
2026-03-18 03:47:12.507453 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2026-03-18 03:47:12.507466 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2026-03-18 03:47:12.507477 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2026-03-18 03:47:12.507488 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2026-03-18 03:47:12.507499 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2026-03-18 03:47:12.507512 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2026-03-18 03:47:12.507524 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2026-03-18 03:47:12.507536 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB
2026-03-18 03:47:12.507548 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2026-03-18 03:47:12.507559 | orchestrator | .rgw.root 9 32 1.4 KiB 4 32 KiB 0 53 GiB
2026-03-18 03:47:12.507586 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2026-03-18 03:47:12.507598 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2026-03-18 03:47:12.507611 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.94 35 GiB
2026-03-18 03:47:12.507623 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2026-03-18 03:47:12.507635 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2026-03-18 03:47:12.565261 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-18 03:47:12.632550 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-18 03:47:12.632652 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2026-03-18 03:47:12.632672 | orchestrator | + osism apply facts
2026-03-18 03:47:14.924989 | orchestrator | 2026-03-18 03:47:14 | INFO  | Task a171c702-f2b8-462f-ac15-a7d800ada649 (facts) was prepared for execution.
2026-03-18 03:47:14.925089 | orchestrator | 2026-03-18 03:47:14 | INFO  | It takes a moment until task a171c702-f2b8-462f-ac15-a7d800ada649 (facts) has been started and output is visible here.
2026-03-18 03:47:30.004738 | orchestrator |
2026-03-18 03:47:30.004888 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-18 03:47:30.004911 | orchestrator |
2026-03-18 03:47:30.004923 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-18 03:47:30.004935 | orchestrator | Wednesday 18 March 2026 03:47:19 +0000 (0:00:00.321) 0:00:00.321 *******
2026-03-18 03:47:30.004946 | orchestrator | ok: [testbed-manager]
2026-03-18 03:47:30.004958 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:47:30.004969 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:47:30.004980 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:47:30.005039 | orchestrator | ok: [testbed-node-3]
2026-03-18 03:47:30.005084 | orchestrator | ok: [testbed-node-4]
2026-03-18 03:47:30.005095 | orchestrator | ok: [testbed-node-5]
2026-03-18 03:47:30.005106 | orchestrator |
2026-03-18 03:47:30.005117 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-18 03:47:30.005129 | orchestrator | Wednesday 18 March 2026 03:47:21 +0000 (0:00:01.330) 0:00:01.651 *******
2026-03-18 03:47:30.005139 | orchestrator | skipping: [testbed-manager]
2026-03-18 03:47:30.005151 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:47:30.005162 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:47:30.005173 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:47:30.005186 | orchestrator | skipping: [testbed-node-3]
2026-03-18 03:47:30.005198 | orchestrator | skipping: [testbed-node-4]
2026-03-18 03:47:30.005211 | orchestrator | skipping: [testbed-node-5]
2026-03-18 03:47:30.005224 | orchestrator |
2026-03-18 03:47:30.005238 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-18 03:47:30.005250 | orchestrator |
2026-03-18 03:47:30.005263 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-18 03:47:30.005276 | orchestrator | Wednesday 18 March 2026 03:47:22 +0000 (0:00:01.431) 0:00:03.083 *******
2026-03-18 03:47:30.005289 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:47:30.005302 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:47:30.005314 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:47:30.005327 | orchestrator | ok: [testbed-manager]
2026-03-18 03:47:30.005339 | orchestrator | ok: [testbed-node-3]
2026-03-18 03:47:30.005352 | orchestrator | ok: [testbed-node-5]
2026-03-18 03:47:30.005364 | orchestrator | ok: [testbed-node-4]
2026-03-18 03:47:30.005376 | orchestrator |
2026-03-18 03:47:30.005389 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-18 03:47:30.005402 | orchestrator |
2026-03-18 03:47:30.005415 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-18 03:47:30.005427 | orchestrator | Wednesday 18 March 2026 03:47:28 +0000 (0:00:06.467) 0:00:09.551 *******
2026-03-18 03:47:30.005441 | orchestrator | skipping: [testbed-manager]
2026-03-18 03:47:30.005453 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:47:30.005466 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:47:30.005478 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:47:30.005491 | orchestrator | skipping: [testbed-node-3]
2026-03-18 03:47:30.005504 | orchestrator | skipping: [testbed-node-4]
2026-03-18 03:47:30.005517 | orchestrator | skipping: [testbed-node-5]
2026-03-18 03:47:30.005530 | orchestrator |
2026-03-18 03:47:30.005541 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 03:47:30.005552 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 03:47:30.005565 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 03:47:30.005576 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 03:47:30.005587 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 03:47:30.005598 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 03:47:30.005609 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 03:47:30.005619 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 03:47:30.005630 | orchestrator |
2026-03-18 03:47:30.005641 | orchestrator |
2026-03-18 03:47:30.005652 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 03:47:30.005671 | orchestrator | Wednesday 18 March 2026 03:47:29 +0000 (0:00:00.590) 0:00:10.142 *******
2026-03-18 03:47:30.005682 | orchestrator | ===============================================================================
2026-03-18 03:47:30.005693 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.47s
2026-03-18 03:47:30.005703 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.43s
2026-03-18 03:47:30.005714 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.33s
2026-03-18 03:47:30.005725 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s
2026-03-18 03:47:30.332715 | orchestrator | + osism validate ceph-mons
2026-03-18 03:48:03.044872 | orchestrator |
2026-03-18 03:48:03.045109 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-03-18 03:48:03.045145 | orchestrator |
2026-03-18 03:48:03.045186 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-18 03:48:03.045235 | orchestrator | Wednesday 18 March 2026 03:47:47 +0000 (0:00:00.435) 0:00:00.435 *******
2026-03-18 03:48:03.045259 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:03.045279 | orchestrator |
2026-03-18 03:48:03.045294 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-18 03:48:03.045307 | orchestrator | Wednesday 18 March 2026 03:47:47 +0000 (0:00:00.849) 0:00:01.284 *******
2026-03-18 03:48:03.045326 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:03.045345 | orchestrator |
2026-03-18 03:48:03.045365 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-18 03:48:03.045386 | orchestrator | Wednesday 18 March 2026 03:47:48 +0000 (0:00:01.028) 0:00:02.313 *******
2026-03-18 03:48:03.045407 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:03.045427 | orchestrator |
2026-03-18 03:48:03.045447 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-03-18 03:48:03.045468 | orchestrator | Wednesday 18 March 2026 03:47:49 +0000 (0:00:00.116) 0:00:02.429 *******
2026-03-18 03:48:03.045487 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:03.045506 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:48:03.045526 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:48:03.045545 | orchestrator |
2026-03-18 03:48:03.045565 | orchestrator | TASK [Get container info] ******************************************************
2026-03-18 03:48:03.045577 | orchestrator | Wednesday 18 March 2026 03:47:49 +0000 (0:00:00.309) 0:00:02.738 *******
2026-03-18 03:48:03.045588 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:48:03.045599 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:03.045609 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:48:03.045620 | orchestrator |
2026-03-18 03:48:03.045631 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-03-18 03:48:03.045642 | orchestrator | Wednesday 18 March 2026 03:47:50 +0000 (0:00:01.115) 0:00:03.854 *******
2026-03-18 03:48:03.045660 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:03.045680 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:48:03.045699 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:48:03.045718 | orchestrator |
2026-03-18 03:48:03.045730 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-03-18 03:48:03.045747 | orchestrator | Wednesday 18 March 2026 03:47:50 +0000 (0:00:00.315) 0:00:04.170 *******
2026-03-18 03:48:03.045766 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:03.045785 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:48:03.045804 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:48:03.045823 | orchestrator |
2026-03-18 03:48:03.045843 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-18 03:48:03.045860 | orchestrator | Wednesday 18 March 2026 03:47:51 +0000 (0:00:00.519) 0:00:04.689 *******
2026-03-18 03:48:03.045879 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:03.045898 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:48:03.045917 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:48:03.045932 | orchestrator |
2026-03-18 03:48:03.046005 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-03-18 03:48:03.046113 | orchestrator | Wednesday 18 March 2026 03:47:51 +0000 (0:00:00.318) 0:00:05.007 *******
2026-03-18 03:48:03.046130 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:03.046146 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:48:03.046165 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:48:03.046183 | orchestrator |
2026-03-18 03:48:03.046204 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-03-18 03:48:03.046223 | orchestrator | Wednesday 18 March 2026 03:47:51 +0000 (0:00:00.284) 0:00:05.292 *******
2026-03-18 03:48:03.046237 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:03.046248 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:48:03.046258 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:48:03.046270 | orchestrator |
2026-03-18 03:48:03.046291 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-18 03:48:03.046310 | orchestrator | Wednesday 18 March 2026 03:47:52 +0000 (0:00:00.494) 0:00:05.787 *******
2026-03-18 03:48:03.046330 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:03.046348 | orchestrator |
2026-03-18 03:48:03.046368 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-18 03:48:03.046381 | orchestrator | Wednesday 18 March 2026 03:47:52 +0000 (0:00:00.255) 0:00:06.042 *******
2026-03-18 03:48:03.046392 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:03.046403 | orchestrator |
2026-03-18 03:48:03.046416 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-18 03:48:03.046435 | orchestrator | Wednesday 18 March 2026 03:47:52 +0000 (0:00:00.305) 0:00:06.348 *******
2026-03-18 03:48:03.046454 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:03.046474 | orchestrator |
2026-03-18 03:48:03.046492 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-18 03:48:03.046508 | orchestrator | Wednesday 18 March 2026 03:47:53 +0000 (0:00:00.259) 0:00:06.608 *******
2026-03-18 03:48:03.046535 | orchestrator |
2026-03-18 03:48:03.046562 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-18 03:48:03.046581 | orchestrator | Wednesday 18 March 2026 03:47:53 +0000 (0:00:00.071) 0:00:06.679 *******
2026-03-18 03:48:03.046601 | orchestrator |
2026-03-18 03:48:03.046621 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-18 03:48:03.046640 | orchestrator | Wednesday 18 March 2026 03:47:53 +0000 (0:00:00.073) 0:00:06.753 *******
2026-03-18 03:48:03.046658 | orchestrator |
2026-03-18 03:48:03.046677 | orchestrator | TASK [Print report file information] *******************************************
2026-03-18 03:48:03.046695 | orchestrator | Wednesday 18 March 2026 03:47:53 +0000 (0:00:00.074) 0:00:06.828 *******
2026-03-18 03:48:03.046714 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:03.046725 | orchestrator |
2026-03-18 03:48:03.046736 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-18 03:48:03.046747 | orchestrator | Wednesday 18 March 2026 03:47:53 +0000 (0:00:00.262) 0:00:07.091 *******
2026-03-18 03:48:03.046764 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:03.046782 | orchestrator |
2026-03-18 03:48:03.046829 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-03-18 03:48:03.046850 | orchestrator | Wednesday 18 March 2026 03:47:53 +0000 (0:00:00.238) 0:00:07.329 *******
2026-03-18 03:48:03.046869 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:03.046885 | orchestrator |
2026-03-18 03:48:03.046896 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-03-18 03:48:03.046913 | orchestrator | Wednesday 18 March 2026 03:47:54 +0000 (0:00:00.110) 0:00:07.440 *******
2026-03-18 03:48:03.046931 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:48:03.046956 | orchestrator |
2026-03-18 03:48:03.047004 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-03-18 03:48:03.047026 | orchestrator | Wednesday 18 March 2026 03:47:55 +0000 (0:00:01.709) 0:00:09.149 *******
2026-03-18 03:48:03.047046 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:03.047083 | orchestrator |
2026-03-18 03:48:03.047104 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-03-18 03:48:03.047122 | orchestrator | Wednesday 18 March 2026 03:47:56 +0000 (0:00:00.521) 0:00:09.671 *******
2026-03-18 03:48:03.047143 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:03.047161 | orchestrator |
2026-03-18 03:48:03.047180 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-03-18 03:48:03.047198 | orchestrator | Wednesday 18 March 2026 03:47:56 +0000 (0:00:00.127) 0:00:09.798 *******
2026-03-18 03:48:03.047217 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:03.047228 | orchestrator |
2026-03-18 03:48:03.047239 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-03-18 03:48:03.047250 | orchestrator | Wednesday 18 March 2026 03:47:56 +0000 (0:00:00.327) 0:00:10.126 *******
2026-03-18 03:48:03.047285 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:03.047305 | orchestrator |
2026-03-18 03:48:03.047324 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-03-18 03:48:03.047343 | orchestrator | Wednesday 18 March 2026 03:47:57 +0000 (0:00:00.317) 0:00:10.443 *******
2026-03-18 03:48:03.047363 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:03.047382 | orchestrator |
2026-03-18 03:48:03.047400 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-03-18 03:48:03.047412 | orchestrator | Wednesday 18 March 2026 03:47:57 +0000 (0:00:00.115) 0:00:10.559 *******
2026-03-18 03:48:03.047422 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:03.047433 | orchestrator |
2026-03-18 03:48:03.047444 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-03-18 03:48:03.047456 | orchestrator | Wednesday 18 March 2026 03:47:57 +0000 (0:00:00.145) 0:00:10.704 *******
2026-03-18 03:48:03.047474 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:03.047493 | orchestrator |
2026-03-18 03:48:03.047512 | orchestrator | TASK [Gather status data] ******************************************************
2026-03-18 03:48:03.047527 | orchestrator | Wednesday 18 March 2026 03:47:57 +0000 (0:00:00.141) 0:00:10.845 *******
2026-03-18 03:48:03.047538 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:48:03.047549 | orchestrator |
2026-03-18 03:48:03.047560 | orchestrator | TASK [Set health test data] ****************************************************
2026-03-18 03:48:03.047570 | orchestrator | Wednesday 18 March 2026 03:47:58 +0000 (0:00:01.394) 0:00:12.239 *******
2026-03-18 03:48:03.047581 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:03.047592 | orchestrator |
2026-03-18 03:48:03.047603 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-03-18 03:48:03.047622 | orchestrator | Wednesday 18 March 2026 03:47:59 +0000 (0:00:00.310) 0:00:12.550 *******
2026-03-18 03:48:03.047642 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:03.047660 | orchestrator |
2026-03-18 03:48:03.047679 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-03-18 03:48:03.047691 | orchestrator | Wednesday 18 March 2026 03:47:59 +0000 (0:00:00.150) 0:00:12.700 *******
2026-03-18 03:48:03.047701 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:03.047712 | orchestrator |
2026-03-18 03:48:03.047723 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-03-18 03:48:03.047744 | orchestrator | Wednesday 18 March 2026 03:47:59 +0000 (0:00:00.143) 0:00:12.843 *******
2026-03-18 03:48:03.047763 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:03.047782 | orchestrator |
2026-03-18 03:48:03.047801 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-03-18 03:48:03.047817 | orchestrator | Wednesday 18 March 2026 03:47:59 +0000 (0:00:00.134) 0:00:12.977 *******
2026-03-18 03:48:03.047836 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:03.047856 | orchestrator |
2026-03-18 03:48:03.047874 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-18 03:48:03.047890 | orchestrator | Wednesday 18 March 2026 03:47:59 +0000 (0:00:00.339) 0:00:13.317 *******
2026-03-18 03:48:03.047901 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:03.047933 | orchestrator |
2026-03-18 03:48:03.047953 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-18 03:48:03.047972 | orchestrator | Wednesday 18 March 2026 03:48:00 +0000 (0:00:00.272) 0:00:13.589 *******
2026-03-18 03:48:03.048013 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:03.048024 | orchestrator |
2026-03-18 03:48:03.048039 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-18 03:48:03.048059 | orchestrator | Wednesday 18 March 2026 03:48:00 +0000 (0:00:00.287) 0:00:13.876 *******
2026-03-18 03:48:03.048078 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:03.048097 | orchestrator |
2026-03-18 03:48:03.048117 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-18 03:48:03.048136 | orchestrator | Wednesday 18 March 2026 03:48:02 +0000 (0:00:01.801) 0:00:15.678 *******
2026-03-18 03:48:03.048149 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:03.048159 | orchestrator |
2026-03-18 03:48:03.048170 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-18 03:48:03.048181 | orchestrator | Wednesday 18 March 2026 03:48:02 +0000 (0:00:00.267) 0:00:15.945 *******
2026-03-18 03:48:03.048191 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:03.048208 | orchestrator |
2026-03-18 03:48:03.048240 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-18 03:48:05.778617 | orchestrator | Wednesday 18 March 2026 03:48:02 +0000 (0:00:00.282) 0:00:16.228 *******
2026-03-18 03:48:05.778726 | orchestrator |
2026-03-18 03:48:05.778747 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-18 03:48:05.778761 | orchestrator | Wednesday 18 March 2026 03:48:02 +0000 (0:00:00.073) 0:00:16.301 *******
2026-03-18 03:48:05.778772 | orchestrator |
2026-03-18 03:48:05.778784 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-18 03:48:05.778795 | orchestrator | Wednesday 18 March 2026 03:48:02 +0000 (0:00:00.073) 0:00:16.374 *******
2026-03-18 03:48:05.778806 | orchestrator |
2026-03-18 03:48:05.778817 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-18 03:48:05.778828 | orchestrator | Wednesday 18 March 2026 03:48:03 +0000 (0:00:00.079) 0:00:16.454 *******
2026-03-18 03:48:05.778840 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:05.778851 | orchestrator |
2026-03-18 03:48:05.778862 | orchestrator | TASK [Print report file information] *******************************************
2026-03-18 03:48:05.778873 | orchestrator | Wednesday 18 March 2026 03:48:04 +0000 (0:00:01.534) 0:00:17.989 *******
2026-03-18 03:48:05.778885 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-03-18 03:48:05.778897 | orchestrator |  "msg": [
2026-03-18 03:48:05.778912 | orchestrator |  "Validator run completed.", 2026-03-18 03:48:05.778925 | orchestrator |  "You can find the report file here:", 2026-03-18 03:48:05.778937 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-18T03:47:47+00:00-report.json", 2026-03-18 03:48:05.778950 | orchestrator |  "on the following host:", 2026-03-18 03:48:05.778961 | orchestrator |  "testbed-manager" 2026-03-18 03:48:05.779003 | orchestrator |  ] 2026-03-18 03:48:05.779019 | orchestrator | } 2026-03-18 03:48:05.779030 | orchestrator | 2026-03-18 03:48:05.779043 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:48:05.779056 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-18 03:48:05.779070 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 03:48:05.779083 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 03:48:05.779124 | orchestrator | 2026-03-18 03:48:05.779137 | orchestrator | 2026-03-18 03:48:05.779148 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:48:05.779162 | orchestrator | Wednesday 18 March 2026 03:48:05 +0000 (0:00:00.859) 0:00:18.849 ******* 2026-03-18 03:48:05.779175 | orchestrator | =============================================================================== 2026-03-18 03:48:05.779188 | orchestrator | Aggregate test results step one ----------------------------------------- 1.80s 2026-03-18 03:48:05.779202 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.71s 2026-03-18 03:48:05.779216 | orchestrator | Write report file ------------------------------------------------------- 1.53s 2026-03-18 03:48:05.779230 | orchestrator | Gather status data 
------------------------------------------------------ 1.39s
2026-03-18 03:48:05.779244 | orchestrator | Get container info ------------------------------------------------------ 1.12s
2026-03-18 03:48:05.779256 | orchestrator | Create report output directory ------------------------------------------ 1.03s
2026-03-18 03:48:05.779267 | orchestrator | Print report file information ------------------------------------------- 0.86s
2026-03-18 03:48:05.779279 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s
2026-03-18 03:48:05.779289 | orchestrator | Set quorum test data ---------------------------------------------------- 0.52s
2026-03-18 03:48:05.779299 | orchestrator | Set test result to passed if container is existing ---------------------- 0.52s
2026-03-18 03:48:05.779309 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.49s
2026-03-18 03:48:05.779319 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.34s
2026-03-18 03:48:05.779330 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s
2026-03-18 03:48:05.779341 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2026-03-18 03:48:05.779351 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s
2026-03-18 03:48:05.779363 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s
2026-03-18 03:48:05.779373 | orchestrator | Set health test data ---------------------------------------------------- 0.31s
2026-03-18 03:48:05.779384 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2026-03-18 03:48:05.779394 | orchestrator | Aggregate test results step two ----------------------------------------- 0.31s
2026-03-18 03:48:05.779405 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.29s
2026-03-18 03:48:06.085269 | orchestrator | + osism validate ceph-mgrs
2026-03-18 03:48:37.785486 | orchestrator |
2026-03-18 03:48:37.785635 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-03-18 03:48:37.785667 | orchestrator |
2026-03-18 03:48:37.785679 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-18 03:48:37.785690 | orchestrator | Wednesday 18 March 2026 03:48:22 +0000 (0:00:00.436) 0:00:00.436 *******
2026-03-18 03:48:37.785701 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:37.785712 | orchestrator |
2026-03-18 03:48:37.785722 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-18 03:48:37.785732 | orchestrator | Wednesday 18 March 2026 03:48:23 +0000 (0:00:00.828) 0:00:01.264 *******
2026-03-18 03:48:37.785761 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:37.785772 | orchestrator |
2026-03-18 03:48:37.785782 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-18 03:48:37.785792 | orchestrator | Wednesday 18 March 2026 03:48:24 +0000 (0:00:00.995) 0:00:02.259 *******
2026-03-18 03:48:37.785802 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:37.785814 | orchestrator |
2026-03-18 03:48:37.785824 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-03-18 03:48:37.785834 | orchestrator | Wednesday 18 March 2026 03:48:24 +0000 (0:00:00.136) 0:00:02.396 *******
2026-03-18 03:48:37.785844 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:37.785875 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:48:37.785886 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:48:37.785895 | orchestrator |
2026-03-18 03:48:37.785905 | orchestrator | TASK [Get container info] ******************************************************
2026-03-18 03:48:37.785916 | orchestrator | Wednesday 18 March 2026 03:48:25 +0000 (0:00:00.311) 0:00:02.708 *******
2026-03-18 03:48:37.785925 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:37.785935 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:48:37.785945 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:48:37.785955 | orchestrator |
2026-03-18 03:48:37.786004 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-03-18 03:48:37.786067 | orchestrator | Wednesday 18 March 2026 03:48:26 +0000 (0:00:01.053) 0:00:03.761 *******
2026-03-18 03:48:37.786080 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:37.786090 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:48:37.786100 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:48:37.786110 | orchestrator |
2026-03-18 03:48:37.786120 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-03-18 03:48:37.786129 | orchestrator | Wednesday 18 March 2026 03:48:26 +0000 (0:00:00.285) 0:00:04.047 *******
2026-03-18 03:48:37.786140 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:37.786150 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:48:37.786160 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:48:37.786169 | orchestrator |
2026-03-18 03:48:37.786179 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-18 03:48:37.786189 | orchestrator | Wednesday 18 March 2026 03:48:26 +0000 (0:00:00.538) 0:00:04.585 *******
2026-03-18 03:48:37.786199 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:37.786209 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:48:37.786218 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:48:37.786228 | orchestrator |
2026-03-18 03:48:37.786238 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-03-18 03:48:37.786248 | orchestrator | Wednesday 18 March 2026 03:48:27 +0000 (0:00:00.322) 0:00:04.943 *******
2026-03-18 03:48:37.786258 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:37.786268 | orchestrator | skipping: [testbed-node-1]
2026-03-18 03:48:37.786278 | orchestrator | skipping: [testbed-node-2]
2026-03-18 03:48:37.786288 | orchestrator |
2026-03-18 03:48:37.786298 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-03-18 03:48:37.786308 | orchestrator | Wednesday 18 March 2026 03:48:27 +0000 (0:00:00.322) 0:00:05.266 *******
2026-03-18 03:48:37.786317 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:37.786327 | orchestrator | ok: [testbed-node-1]
2026-03-18 03:48:37.786337 | orchestrator | ok: [testbed-node-2]
2026-03-18 03:48:37.786346 | orchestrator |
2026-03-18 03:48:37.786356 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-18 03:48:37.786366 | orchestrator | Wednesday 18 March 2026 03:48:28 +0000 (0:00:00.539) 0:00:05.806 *******
2026-03-18 03:48:37.786376 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:37.786386 | orchestrator |
2026-03-18 03:48:37.786396 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-18 03:48:37.786406 | orchestrator | Wednesday 18 March 2026 03:48:28 +0000 (0:00:00.251) 0:00:06.058 *******
2026-03-18 03:48:37.786416 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:37.786425 | orchestrator |
2026-03-18 03:48:37.786441 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-18 03:48:37.786451 | orchestrator | Wednesday 18 March 2026 03:48:28 +0000 (0:00:00.253) 0:00:06.319 *******
2026-03-18 03:48:37.786461 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:37.786471 | orchestrator |
2026-03-18 03:48:37.786481 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-18 03:48:37.786491 | orchestrator | Wednesday 18 March 2026 03:48:28 +0000 (0:00:00.073) 0:00:06.572 *******
2026-03-18 03:48:37.786501 | orchestrator |
2026-03-18 03:48:37.786510 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-18 03:48:37.786529 | orchestrator | Wednesday 18 March 2026 03:48:29 +0000 (0:00:00.073) 0:00:06.645 *******
2026-03-18 03:48:37.786538 | orchestrator |
2026-03-18 03:48:37.786548 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-18 03:48:37.786558 | orchestrator | Wednesday 18 March 2026 03:48:29 +0000 (0:00:00.072) 0:00:06.718 *******
2026-03-18 03:48:37.786568 | orchestrator |
2026-03-18 03:48:37.786578 | orchestrator | TASK [Print report file information] *******************************************
2026-03-18 03:48:37.786588 | orchestrator | Wednesday 18 March 2026 03:48:29 +0000 (0:00:00.076) 0:00:06.795 *******
2026-03-18 03:48:37.786598 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:37.786607 | orchestrator |
2026-03-18 03:48:37.786617 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-18 03:48:37.786627 | orchestrator | Wednesday 18 March 2026 03:48:29 +0000 (0:00:00.247) 0:00:07.062 *******
2026-03-18 03:48:37.786637 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:37.786647 | orchestrator |
2026-03-18 03:48:37.786677 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-03-18 03:48:37.786687 | orchestrator | Wednesday 18 March 2026 03:48:29 +0000 (0:00:00.127) 0:00:07.310 *******
2026-03-18 03:48:37.786697 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:37.786707 | orchestrator |
2026-03-18 03:48:37.786717 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-03-18
03:48:37.786727 | orchestrator | Wednesday 18 March 2026 03:48:29 +0000 (0:00:00.127) 0:00:07.437 *******
2026-03-18 03:48:37.786736 | orchestrator | changed: [testbed-node-0]
2026-03-18 03:48:37.786746 | orchestrator |
2026-03-18 03:48:37.786756 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-03-18 03:48:37.786766 | orchestrator | Wednesday 18 March 2026 03:48:31 +0000 (0:00:02.112) 0:00:09.550 *******
2026-03-18 03:48:37.786776 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:37.786785 | orchestrator |
2026-03-18 03:48:37.786795 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-03-18 03:48:37.786805 | orchestrator | Wednesday 18 March 2026 03:48:32 +0000 (0:00:00.438) 0:00:09.989 *******
2026-03-18 03:48:37.786815 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:37.786825 | orchestrator |
2026-03-18 03:48:37.786834 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-03-18 03:48:37.786844 | orchestrator | Wednesday 18 March 2026 03:48:32 +0000 (0:00:00.334) 0:00:10.323 *******
2026-03-18 03:48:37.786854 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:37.786864 | orchestrator |
2026-03-18 03:48:37.786874 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-03-18 03:48:37.786884 | orchestrator | Wednesday 18 March 2026 03:48:32 +0000 (0:00:00.139) 0:00:10.463 *******
2026-03-18 03:48:37.786894 | orchestrator | ok: [testbed-node-0]
2026-03-18 03:48:37.786903 | orchestrator |
2026-03-18 03:48:37.786913 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-18 03:48:37.786923 | orchestrator | Wednesday 18 March 2026 03:48:33 +0000 (0:00:00.141) 0:00:10.604 *******
2026-03-18 03:48:37.786933 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:37.786943 | orchestrator |
2026-03-18 03:48:37.786952 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-18 03:48:37.786981 | orchestrator | Wednesday 18 March 2026 03:48:33 +0000 (0:00:00.345) 0:00:10.950 *******
2026-03-18 03:48:37.786992 | orchestrator | skipping: [testbed-node-0]
2026-03-18 03:48:37.787001 | orchestrator |
2026-03-18 03:48:37.787011 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-18 03:48:37.787021 | orchestrator | Wednesday 18 March 2026 03:48:33 +0000 (0:00:00.247) 0:00:11.197 *******
2026-03-18 03:48:37.787031 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:37.787040 | orchestrator |
2026-03-18 03:48:37.787050 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-18 03:48:37.787060 | orchestrator | Wednesday 18 March 2026 03:48:34 +0000 (0:00:01.327) 0:00:12.525 *******
2026-03-18 03:48:37.787076 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:37.787086 | orchestrator |
2026-03-18 03:48:37.787095 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-18 03:48:37.787105 | orchestrator | Wednesday 18 March 2026 03:48:35 +0000 (0:00:00.321) 0:00:12.846 *******
2026-03-18 03:48:37.787115 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:37.787125 | orchestrator |
2026-03-18 03:48:37.787138 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-18 03:48:37.787153 | orchestrator | Wednesday 18 March 2026 03:48:35 +0000 (0:00:00.287) 0:00:13.133 *******
2026-03-18 03:48:37.787168 | orchestrator |
2026-03-18 03:48:37.787178 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-18 03:48:37.787188 | orchestrator | Wednesday 18 March 2026 03:48:35 +0000 (0:00:00.070) 0:00:13.204 *******
2026-03-18 03:48:37.787198 | orchestrator |
2026-03-18 03:48:37.787207 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-18 03:48:37.787217 | orchestrator | Wednesday 18 March 2026 03:48:35 +0000 (0:00:00.069) 0:00:13.274 *******
2026-03-18 03:48:37.787227 | orchestrator |
2026-03-18 03:48:37.787237 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-18 03:48:37.787246 | orchestrator | Wednesday 18 March 2026 03:48:35 +0000 (0:00:00.276) 0:00:13.550 *******
2026-03-18 03:48:37.787256 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:37.787266 | orchestrator |
2026-03-18 03:48:37.787281 | orchestrator | TASK [Print report file information] *******************************************
2026-03-18 03:48:37.787291 | orchestrator | Wednesday 18 March 2026 03:48:37 +0000 (0:00:01.370) 0:00:14.921 *******
2026-03-18 03:48:37.787301 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-03-18 03:48:37.787318 | orchestrator |  "msg": [
2026-03-18 03:48:37.787331 | orchestrator |  "Validator run completed.",
2026-03-18 03:48:37.787341 | orchestrator |  "You can find the report file here:",
2026-03-18 03:48:37.787351 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-03-18T03:48:23+00:00-report.json",
2026-03-18 03:48:37.787362 | orchestrator |  "on the following host:",
2026-03-18 03:48:37.787372 | orchestrator |  "testbed-manager"
2026-03-18 03:48:37.787381 | orchestrator |  ]
2026-03-18 03:48:37.787392 | orchestrator | }
2026-03-18 03:48:37.787402 | orchestrator |
2026-03-18 03:48:37.787412 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 03:48:37.787422 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-03-18 03:48:37.787434 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 03:48:37.787452 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 03:48:38.117227 | orchestrator |
2026-03-18 03:48:38.117312 | orchestrator |
2026-03-18 03:48:38.117324 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 03:48:38.117334 | orchestrator | Wednesday 18 March 2026 03:48:37 +0000 (0:00:00.439) 0:00:15.361 *******
2026-03-18 03:48:38.117341 | orchestrator | ===============================================================================
2026-03-18 03:48:38.117349 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.11s
2026-03-18 03:48:38.117356 | orchestrator | Write report file ------------------------------------------------------- 1.37s
2026-03-18 03:48:38.117364 | orchestrator | Aggregate test results step one ----------------------------------------- 1.33s
2026-03-18 03:48:38.117371 | orchestrator | Get container info ------------------------------------------------------ 1.05s
2026-03-18 03:48:38.117378 | orchestrator | Create report output directory ------------------------------------------ 1.00s
2026-03-18 03:48:38.117407 | orchestrator | Get timestamp for report file ------------------------------------------- 0.83s
2026-03-18 03:48:38.117414 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.54s
2026-03-18 03:48:38.117422 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s
2026-03-18 03:48:38.117429 | orchestrator | Print report file information ------------------------------------------- 0.44s
2026-03-18 03:48:38.117436 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.44s
2026-03-18 03:48:38.117444 | orchestrator | Flush handlers ---------------------------------------------------------- 0.42s
2026-03-18 03:48:38.117451 | orchestrator | Prepare test data ------------------------------------------------------- 0.36s
2026-03-18 03:48:38.117458 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.35s
2026-03-18 03:48:38.117465 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.33s
2026-03-18 03:48:38.117472 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.32s
2026-03-18 03:48:38.117479 | orchestrator | Aggregate test results step two ----------------------------------------- 0.32s
2026-03-18 03:48:38.117487 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2026-03-18 03:48:38.117494 | orchestrator | Aggregate test results step three --------------------------------------- 0.29s
2026-03-18 03:48:38.117501 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s
2026-03-18 03:48:38.117508 | orchestrator | Print report file information ------------------------------------------- 0.27s
2026-03-18 03:48:38.443152 | orchestrator | + osism validate ceph-osds
2026-03-18 03:48:59.851758 | orchestrator |
2026-03-18 03:48:59.851894 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-03-18 03:48:59.851925 | orchestrator |
2026-03-18 03:48:59.851998 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-18 03:48:59.852021 | orchestrator | Wednesday 18 March 2026 03:48:55 +0000 (0:00:00.468) 0:00:00.468 *******
2026-03-18 03:48:59.852039 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:59.852056 | orchestrator |
2026-03-18 03:48:59.852073 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-18
03:48:59.852090 | orchestrator | Wednesday 18 March 2026 03:48:56 +0000 (0:00:00.862) 0:00:01.331 *******
2026-03-18 03:48:59.852108 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:59.852126 | orchestrator |
2026-03-18 03:48:59.852144 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-18 03:48:59.852162 | orchestrator | Wednesday 18 March 2026 03:48:56 +0000 (0:00:00.544) 0:00:01.876 *******
2026-03-18 03:48:59.852180 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-18 03:48:59.852197 | orchestrator |
2026-03-18 03:48:59.852215 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-18 03:48:59.852233 | orchestrator | Wednesday 18 March 2026 03:48:57 +0000 (0:00:00.701) 0:00:02.577 *******
2026-03-18 03:48:59.852252 | orchestrator | ok: [testbed-node-3]
2026-03-18 03:48:59.852273 | orchestrator |
2026-03-18 03:48:59.852291 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-18 03:48:59.852311 | orchestrator | Wednesday 18 March 2026 03:48:57 +0000 (0:00:00.132) 0:00:02.710 *******
2026-03-18 03:48:59.852330 | orchestrator | skipping: [testbed-node-3]
2026-03-18 03:48:59.852348 | orchestrator |
2026-03-18 03:48:59.852368 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-18 03:48:59.852386 | orchestrator | Wednesday 18 March 2026 03:48:57 +0000 (0:00:00.128) 0:00:02.838 *******
2026-03-18 03:48:59.852406 | orchestrator | skipping: [testbed-node-3]
2026-03-18 03:48:59.852423 | orchestrator | skipping: [testbed-node-4]
2026-03-18 03:48:59.852442 | orchestrator | skipping: [testbed-node-5]
2026-03-18 03:48:59.852461 | orchestrator |
2026-03-18 03:48:59.852479 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-18 03:48:59.852537 | orchestrator | Wednesday 18 March 2026 03:48:57 +0000 (0:00:00.334) 0:00:03.173 *******
2026-03-18 03:48:59.852557 | orchestrator | ok: [testbed-node-3]
2026-03-18 03:48:59.852577 | orchestrator |
2026-03-18 03:48:59.852597 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-18 03:48:59.852617 | orchestrator | Wednesday 18 March 2026 03:48:58 +0000 (0:00:00.160) 0:00:03.334 *******
2026-03-18 03:48:59.852635 | orchestrator | ok: [testbed-node-3]
2026-03-18 03:48:59.852653 | orchestrator | ok: [testbed-node-4]
2026-03-18 03:48:59.852671 | orchestrator | ok: [testbed-node-5]
2026-03-18 03:48:59.852688 | orchestrator |
2026-03-18 03:48:59.852705 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-03-18 03:48:59.852723 | orchestrator | Wednesday 18 March 2026 03:48:58 +0000 (0:00:00.378) 0:00:03.712 *******
2026-03-18 03:48:59.852740 | orchestrator | ok: [testbed-node-3]
2026-03-18 03:48:59.852758 | orchestrator |
2026-03-18 03:48:59.852776 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-18 03:48:59.852794 | orchestrator | Wednesday 18 March 2026 03:48:59 +0000 (0:00:00.798) 0:00:04.510 *******
2026-03-18 03:48:59.852814 | orchestrator | ok: [testbed-node-3]
2026-03-18 03:48:59.852831 | orchestrator | ok: [testbed-node-4]
2026-03-18 03:48:59.852849 | orchestrator | ok: [testbed-node-5]
2026-03-18 03:48:59.852867 | orchestrator |
2026-03-18 03:48:59.852884 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-03-18 03:48:59.852902 | orchestrator | Wednesday 18 March 2026 03:48:59 +0000 (0:00:00.301) 0:00:04.812 *******
2026-03-18 03:48:59.852977 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c6cf2e4bac80b3684aaadc81342c2387710a397399fe316dd1ce7a2b14bbbf76', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-18 03:48:59.853005 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3a1504af5ee42a6ed6f92928c93799b3b347d7b85d69c5f0db96259a23c33edb', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-18 03:48:59.853026 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c3ac09f9ecb742bf167e03ec26e7369ed87fbdf959e658fa987c5174a295a171', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-18 03:48:59.853046 | orchestrator | skipping: [testbed-node-3] => (item={'id': '90b7e652867a2860d759e7ce8386a686d8d62a4f6e30e027483fdc8316d963aa', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-03-18 03:48:59.853064 | orchestrator | skipping: [testbed-node-3] => (item={'id': '458e7808338f7f79f05be14e825814aa03a78799fd0b0a1113f8a528c90a15d3', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-03-18 03:48:59.853169 | orchestrator | skipping: [testbed-node-3] => (item={'id': '310adebea9d14f830ce8540d0db022bcf64b262c301e34235636dc6abd90cce1', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-03-18 03:48:59.853193 | orchestrator | skipping: [testbed-node-3] => (item={'id': '921db54a6c65fc9a4ef33b5921fb75111ca490cf0845b48f0321c12ad2185a18', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-03-18 03:48:59.853212 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bee9f3ff393dfa606a4bebb5110849edde7778f68dedbb188b9fe7727e98b4fd', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-03-18 03:48:59.853247 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'dbe85473ce55a297e47d337ded696deb7e6b0cc0e844b8bc9a4d43f6ee120a04', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-18 03:48:59.853272 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2616193b9ce96161ca1a6d7294a43195157a3e3c8ea199eadc22f3b3577f3b72', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-18 03:48:59.853311 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f0203ac8230d9b5848cf40d7061f3015dd6ac9a252d07ec4893148a269a6c3fd', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-18 03:48:59.853332 | orchestrator | ok: [testbed-node-3] => (item={'id': 'efc68cbce977151f62b6217d2cf53057008f69e750ad0a601d1619173dfbcfe7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-18 03:48:59.853352 | orchestrator | ok: [testbed-node-3] => (item={'id': 'f1be4cf340e06a153e4e5cec759ae6a4b3363232011ad135f7d6d986e14cdbd3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'})
2026-03-18 03:48:59.853370 | orchestrator | skipping: [testbed-node-3] => (item={'id': '75e652035fae79433e253273fcdb958bb32439f52d6fea9a95256ea528d9faec', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-18 03:48:59.853389 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f17655f6f66cedc6f1a9473444f318dd51f87654412ad1cd8794b56f0f54671e', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-18 03:48:59.853409 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ffd6f9aa3c69926ea7e3dd838f6d21f5bb28ece0643b3c7110ca9ca52bba0f9d', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-18 03:48:59.853428 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f1af5f4b821d0b3d02b5316c88ec6588f986855b8ade888cab89f3d23e2c3df2', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-18 03:48:59.853448 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9cc5f1edad947f00e79452a34aa541f1083efdcb3c8c1d07b98782916696d749', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-18 03:48:59.853465 | orchestrator | skipping: [testbed-node-3] => (item={'id': '667b418709bbf804a372c6ef72bc56a75df0c60c12dd6e97193efdc3d345b007', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-18 03:48:59.853485 | orchestrator | skipping: [testbed-node-4] => (item={'id': '057e87957cc338aca2e500c0650c8772b4ff2cc4cf1036134fe9d1df5b9362db', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up
9 minutes'})  2026-03-18 03:48:59.853517 | orchestrator | skipping: [testbed-node-4] => (item={'id': '18656f6f78c81171feea61774dd4f247033c87212b1e35a2bb04a52c11c96104', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-03-18 03:49:00.119029 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1176ff1262bfec36e7d6eb07ebde3d665c357952a63c8f493a288ab34be61b83', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-03-18 03:49:00.119159 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4bfb3757a51788c333d74721929ed4dc0e831c732bba81629adeb5ba56b75e1a', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-03-18 03:49:00.119176 | orchestrator | skipping: [testbed-node-4] => (item={'id': '873efa2f2cfdd55fc79767a1f67684f5b995af04fcb0484a7c974a6655d3d348', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-03-18 03:49:00.119204 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5023086f7d8de766d41cb7ac0a05fbaba0b030fc8e3b42f205c7f2817e0d1a62', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-03-18 03:49:00.119216 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5fb65045d87a3685e0b4814a53978a592a5e4037c70fb57ca07f23e164804257', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-03-18 03:49:00.119227 | orchestrator | skipping: [testbed-node-4] => 
(item={'id': '2d15f4127d70194d9b0cab467dccc20c73fa9157fe95474486155f02c0c936d2', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})  2026-03-18 03:49:00.119238 | orchestrator | skipping: [testbed-node-4] => (item={'id': '21496fe4a9e6506403d5aee5566656556166b4153cbf90f839963a00d9418d78', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-03-18 03:49:00.119251 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd84db3c3b1f9ae47419fe292ae38a499f54169a33211f032a93f48ad6d3f48fa', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-03-18 03:49:00.119262 | orchestrator | skipping: [testbed-node-4] => (item={'id': '781127834bb80daa34bda7855bf7f223cb48f179d26e469facbbe599d9fce811', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-03-18 03:49:00.119275 | orchestrator | ok: [testbed-node-4] => (item={'id': 'e0b71bb3c160e898b8ef0eb5b9255641d13e107f3ca14294f6e6c2dd405db74d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'}) 2026-03-18 03:49:00.119287 | orchestrator | ok: [testbed-node-4] => (item={'id': '3c6ab7979eaf0cf7324db0a5de19ffaed234712bd81a3703d23d4fa762de2a3f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'}) 2026-03-18 03:49:00.119298 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8ecb19dfb4261a4261429cabdf2db903e02a191a8f5a6bf84282d061527ef828', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 
'state': 'running', 'status': 'Up About an hour'})  2026-03-18 03:49:00.119309 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8389610d15443c1baadbd23f30c0d5003ecd472930fd6ee5516efaa1a8de9800', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-03-18 03:49:00.119320 | orchestrator | skipping: [testbed-node-4] => (item={'id': '46a0515cf6fb15e77b05c415d372538a202b7b114af1bcda8842452bad0b9634', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-03-18 03:49:00.119357 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f3b411a1b5a19d6104593374ff834c8aae455c9177f5772b14aa73af0ad0b810', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-18 03:49:00.119370 | orchestrator | skipping: [testbed-node-4] => (item={'id': '208c2278fd3e1101e01549419e37330d4738f809695b67ee27dd166406112170', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-18 03:49:00.119381 | orchestrator | skipping: [testbed-node-4] => (item={'id': '70c6560b65cb9998026114702dce4211adda46455540302e8080f9bb422324be', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-18 03:49:00.119392 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2f20413692893a5878333b4b228a6136292c7ffcba1973c412e82a6a81a1815b', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-03-18 03:49:00.119408 | orchestrator | skipping: [testbed-node-5] => 
(item={'id': 'd5256cf2ba57e34c87ca6eeb809898d5bead470e4a6c117f11dabee7d75a9b0c', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-03-18 03:49:00.119420 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2bad15fd8cdf09e35ffd430e2199cd46332b7291a272b8fd56683cad77dc87b7', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-03-18 03:49:00.119431 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0e9d5b7dfd538c57b2388954595cfea572f9192f7d7314cdadeca49e14f23762', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-03-18 03:49:00.119442 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6dc3c90ade95a3082f0bbd55feb0d0a54716da7532c6ae40a370d2fd470ff906', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-03-18 03:49:00.119453 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9eec4e3adc574511b944276fa45f58044e17b1b50af6960a49c6c97a41df71de', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-03-18 03:49:00.119464 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd15a0837327d88a6fb2a76cc19ae86b5d8cfb814a7f16a3885cb8f970afd08d5', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})  2026-03-18 03:49:00.119475 | orchestrator | skipping: [testbed-node-5] => (item={'id': '05a5b4164ba9b146da4b3bf002eedc26b5661664bfac1f4ca670483cb48d0110', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})  2026-03-18 03:49:00.119488 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cec6b580de85c9fb0ed24975efdafe3fa54ca8e91fdc3b0c78f2f2d368c2591f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-03-18 03:49:00.119502 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4a2d72394f204a449d4c76ffc9c1ba58e4b71bb358ec6914b72a3cc7f1aca686', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-03-18 03:49:00.119530 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6b53c49fa5f4b6dcef0fed955d0b9496334e132f13c4b5385fc02bda65948e42', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-03-18 03:49:00.119543 | orchestrator | ok: [testbed-node-5] => (item={'id': '9a7b71f93428cde1341adfeb3c97c6e74811d5f942402e46d02267a5a44acc70', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-03-18 03:49:00.119564 | orchestrator | ok: [testbed-node-5] => (item={'id': '71e674d592c4e5c82ab5112d47bfee3f6660c7d8a9e4868d67522c6d4eabad33', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'}) 2026-03-18 03:49:11.706572 | orchestrator | skipping: [testbed-node-5] => (item={'id': '99101e1d3b71635575bda64e08d5aad11d9bd54fe6973e0f9ae8efa40644cd74', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-03-18 03:49:11.706685 | 
orchestrator | skipping: [testbed-node-5] => (item={'id': '471e137f2a2ddcb2ca9470c472934a8ced6237abfa45922923793a60a2b4d445', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-03-18 03:49:11.706702 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ce5fed41478f1f6c4eabf90c2f430a3c3fdc1fb53a4ced190fd3065a47a71b2c', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-03-18 03:49:11.706716 | orchestrator | skipping: [testbed-node-5] => (item={'id': '418e3a5cd00e0e7646af20e2768f89987cc1cb2054a1a175012aafc49bb9737c', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-18 03:49:11.706729 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5af5b88ceed2a4b82c0ed552f77ef32f0f641af9ae5f35d7f9b6a499bd487a0e', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-18 03:49:11.706741 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'de3d97ac5bb8220d2bd665907fde5c1e8da56301f9ddbeba6c312f786c512884', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-03-18 03:49:11.706752 | orchestrator | 2026-03-18 03:49:11.706765 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-03-18 03:49:11.706778 | orchestrator | Wednesday 18 March 2026 03:49:00 +0000 (0:00:00.581) 0:00:05.394 ******* 2026-03-18 03:49:11.706789 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:11.706801 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:49:11.706812 | orchestrator | ok: [testbed-node-5] 2026-03-18 
03:49:11.706822 | orchestrator | 2026-03-18 03:49:11.706834 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-03-18 03:49:11.706845 | orchestrator | Wednesday 18 March 2026 03:49:00 +0000 (0:00:00.338) 0:00:05.732 ******* 2026-03-18 03:49:11.706856 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:49:11.706867 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:49:11.706878 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:49:11.706889 | orchestrator | 2026-03-18 03:49:11.706901 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-03-18 03:49:11.706912 | orchestrator | Wednesday 18 March 2026 03:49:00 +0000 (0:00:00.487) 0:00:06.219 ******* 2026-03-18 03:49:11.706923 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:11.706934 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:49:11.707008 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:49:11.707020 | orchestrator | 2026-03-18 03:49:11.707031 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-18 03:49:11.707066 | orchestrator | Wednesday 18 March 2026 03:49:01 +0000 (0:00:00.334) 0:00:06.554 ******* 2026-03-18 03:49:11.707078 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:11.707089 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:49:11.707099 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:49:11.707110 | orchestrator | 2026-03-18 03:49:11.707121 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-03-18 03:49:11.707132 | orchestrator | Wednesday 18 March 2026 03:49:01 +0000 (0:00:00.328) 0:00:06.883 ******* 2026-03-18 03:49:11.707161 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-03-18 03:49:11.707175 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 
'state': 'running'})  2026-03-18 03:49:11.707186 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:49:11.707197 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-03-18 03:49:11.707216 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-03-18 03:49:11.707236 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:49:11.707254 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-03-18 03:49:11.707268 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-03-18 03:49:11.707279 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:49:11.707290 | orchestrator | 2026-03-18 03:49:11.707301 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-03-18 03:49:11.707312 | orchestrator | Wednesday 18 March 2026 03:49:01 +0000 (0:00:00.334) 0:00:07.217 ******* 2026-03-18 03:49:11.707323 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:11.707334 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:49:11.707344 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:49:11.707355 | orchestrator | 2026-03-18 03:49:11.707366 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-18 03:49:11.707376 | orchestrator | Wednesday 18 March 2026 03:49:02 +0000 (0:00:00.523) 0:00:07.741 ******* 2026-03-18 03:49:11.707387 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:49:11.707417 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:49:11.707428 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:49:11.707439 | orchestrator | 2026-03-18 03:49:11.707450 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-03-18 03:49:11.707461 | orchestrator | Wednesday 
18 March 2026 03:49:02 +0000 (0:00:00.298) 0:00:08.039 ******* 2026-03-18 03:49:11.707472 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:49:11.707483 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:49:11.707494 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:49:11.707504 | orchestrator | 2026-03-18 03:49:11.707515 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-03-18 03:49:11.707526 | orchestrator | Wednesday 18 March 2026 03:49:03 +0000 (0:00:00.295) 0:00:08.334 ******* 2026-03-18 03:49:11.707537 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:11.707547 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:49:11.707558 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:49:11.707569 | orchestrator | 2026-03-18 03:49:11.707580 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-18 03:49:11.707590 | orchestrator | Wednesday 18 March 2026 03:49:03 +0000 (0:00:00.334) 0:00:08.669 ******* 2026-03-18 03:49:11.707601 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:49:11.707612 | orchestrator | 2026-03-18 03:49:11.707629 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-18 03:49:11.707641 | orchestrator | Wednesday 18 March 2026 03:49:04 +0000 (0:00:00.676) 0:00:09.345 ******* 2026-03-18 03:49:11.707651 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:49:11.707662 | orchestrator | 2026-03-18 03:49:11.707673 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-18 03:49:11.707692 | orchestrator | Wednesday 18 March 2026 03:49:04 +0000 (0:00:00.290) 0:00:09.635 ******* 2026-03-18 03:49:11.707703 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:49:11.707714 | orchestrator | 2026-03-18 03:49:11.707724 | orchestrator | TASK [Flush handlers] ********************************************************** 
2026-03-18 03:49:11.707735 | orchestrator | Wednesday 18 March 2026 03:49:04 +0000 (0:00:00.260) 0:00:09.896 ******* 2026-03-18 03:49:11.707746 | orchestrator | 2026-03-18 03:49:11.707757 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-18 03:49:11.707768 | orchestrator | Wednesday 18 March 2026 03:49:04 +0000 (0:00:00.070) 0:00:09.966 ******* 2026-03-18 03:49:11.707779 | orchestrator | 2026-03-18 03:49:11.707790 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-18 03:49:11.707800 | orchestrator | Wednesday 18 March 2026 03:49:04 +0000 (0:00:00.071) 0:00:10.038 ******* 2026-03-18 03:49:11.707811 | orchestrator | 2026-03-18 03:49:11.707822 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-18 03:49:11.707833 | orchestrator | Wednesday 18 March 2026 03:49:04 +0000 (0:00:00.074) 0:00:10.112 ******* 2026-03-18 03:49:11.707843 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:49:11.707854 | orchestrator | 2026-03-18 03:49:11.707865 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-03-18 03:49:11.707875 | orchestrator | Wednesday 18 March 2026 03:49:05 +0000 (0:00:00.286) 0:00:10.399 ******* 2026-03-18 03:49:11.707886 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:49:11.707897 | orchestrator | 2026-03-18 03:49:11.707908 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-18 03:49:11.707918 | orchestrator | Wednesday 18 March 2026 03:49:05 +0000 (0:00:00.285) 0:00:10.684 ******* 2026-03-18 03:49:11.707929 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:11.707964 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:49:11.707976 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:49:11.707987 | orchestrator | 2026-03-18 03:49:11.707997 | orchestrator | TASK [Set _mon_hostname 
fact] ************************************************** 2026-03-18 03:49:11.708008 | orchestrator | Wednesday 18 March 2026 03:49:05 +0000 (0:00:00.299) 0:00:10.984 ******* 2026-03-18 03:49:11.708019 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:11.708030 | orchestrator | 2026-03-18 03:49:11.708041 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-03-18 03:49:11.708051 | orchestrator | Wednesday 18 March 2026 03:49:06 +0000 (0:00:00.675) 0:00:11.660 ******* 2026-03-18 03:49:11.708062 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-18 03:49:11.708073 | orchestrator | 2026-03-18 03:49:11.708084 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-03-18 03:49:11.708094 | orchestrator | Wednesday 18 March 2026 03:49:07 +0000 (0:00:01.589) 0:00:13.250 ******* 2026-03-18 03:49:11.708105 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:11.708116 | orchestrator | 2026-03-18 03:49:11.708127 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-03-18 03:49:11.708138 | orchestrator | Wednesday 18 March 2026 03:49:08 +0000 (0:00:00.157) 0:00:13.408 ******* 2026-03-18 03:49:11.708148 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:11.708159 | orchestrator | 2026-03-18 03:49:11.708170 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-03-18 03:49:11.708181 | orchestrator | Wednesday 18 March 2026 03:49:08 +0000 (0:00:00.326) 0:00:13.734 ******* 2026-03-18 03:49:11.708191 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:49:11.708202 | orchestrator | 2026-03-18 03:49:11.708213 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-03-18 03:49:11.708224 | orchestrator | Wednesday 18 March 2026 03:49:08 +0000 (0:00:00.130) 0:00:13.865 ******* 2026-03-18 03:49:11.708235 
| orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:11.708245 | orchestrator | 2026-03-18 03:49:11.708256 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-18 03:49:11.708273 | orchestrator | Wednesday 18 March 2026 03:49:08 +0000 (0:00:00.134) 0:00:13.999 ******* 2026-03-18 03:49:11.708284 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:11.708295 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:49:11.708306 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:49:11.708316 | orchestrator | 2026-03-18 03:49:11.708327 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-03-18 03:49:11.708338 | orchestrator | Wednesday 18 March 2026 03:49:09 +0000 (0:00:00.297) 0:00:14.297 ******* 2026-03-18 03:49:11.708349 | orchestrator | changed: [testbed-node-3] 2026-03-18 03:49:11.708360 | orchestrator | changed: [testbed-node-4] 2026-03-18 03:49:11.708370 | orchestrator | changed: [testbed-node-5] 2026-03-18 03:49:22.208125 | orchestrator | 2026-03-18 03:49:22.208236 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-03-18 03:49:22.208252 | orchestrator | Wednesday 18 March 2026 03:49:11 +0000 (0:00:02.691) 0:00:16.989 ******* 2026-03-18 03:49:22.208264 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:22.208276 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:49:22.208287 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:49:22.208298 | orchestrator | 2026-03-18 03:49:22.208309 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-03-18 03:49:22.208320 | orchestrator | Wednesday 18 March 2026 03:49:12 +0000 (0:00:00.319) 0:00:17.308 ******* 2026-03-18 03:49:22.208331 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:22.208342 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:49:22.208352 | orchestrator | ok: [testbed-node-5] 2026-03-18 
03:49:22.208363 | orchestrator | 2026-03-18 03:49:22.208374 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-03-18 03:49:22.208385 | orchestrator | Wednesday 18 March 2026 03:49:12 +0000 (0:00:00.494) 0:00:17.803 ******* 2026-03-18 03:49:22.208396 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:49:22.208407 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:49:22.208418 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:49:22.208429 | orchestrator | 2026-03-18 03:49:22.208456 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-03-18 03:49:22.208467 | orchestrator | Wednesday 18 March 2026 03:49:12 +0000 (0:00:00.317) 0:00:18.121 ******* 2026-03-18 03:49:22.208478 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:22.208489 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:49:22.208499 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:49:22.208510 | orchestrator | 2026-03-18 03:49:22.208520 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-03-18 03:49:22.208531 | orchestrator | Wednesday 18 March 2026 03:49:13 +0000 (0:00:00.564) 0:00:18.685 ******* 2026-03-18 03:49:22.208542 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:49:22.208553 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:49:22.208563 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:49:22.208575 | orchestrator | 2026-03-18 03:49:22.208589 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-03-18 03:49:22.208601 | orchestrator | Wednesday 18 March 2026 03:49:13 +0000 (0:00:00.315) 0:00:19.001 ******* 2026-03-18 03:49:22.208614 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:49:22.208626 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:49:22.208638 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:49:22.208649 | 
orchestrator | 2026-03-18 03:49:22.208662 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-18 03:49:22.208674 | orchestrator | Wednesday 18 March 2026 03:49:14 +0000 (0:00:00.341) 0:00:19.343 ******* 2026-03-18 03:49:22.208687 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:22.208699 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:49:22.208711 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:49:22.208723 | orchestrator | 2026-03-18 03:49:22.208735 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-03-18 03:49:22.208746 | orchestrator | Wednesday 18 March 2026 03:49:14 +0000 (0:00:00.520) 0:00:19.863 ******* 2026-03-18 03:49:22.208779 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:22.208791 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:49:22.208801 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:49:22.208812 | orchestrator | 2026-03-18 03:49:22.208823 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-03-18 03:49:22.208834 | orchestrator | Wednesday 18 March 2026 03:49:15 +0000 (0:00:00.799) 0:00:20.663 ******* 2026-03-18 03:49:22.208844 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:22.208855 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:49:22.208865 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:49:22.208876 | orchestrator | 2026-03-18 03:49:22.208887 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-03-18 03:49:22.208898 | orchestrator | Wednesday 18 March 2026 03:49:15 +0000 (0:00:00.323) 0:00:20.987 ******* 2026-03-18 03:49:22.208908 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:49:22.208919 | orchestrator | skipping: [testbed-node-4] 2026-03-18 03:49:22.208929 | orchestrator | skipping: [testbed-node-5] 2026-03-18 03:49:22.208975 | orchestrator | 2026-03-18 03:49:22.208986 | orchestrator 
| TASK [Pass test if no sub test failed] ***************************************** 2026-03-18 03:49:22.208997 | orchestrator | Wednesday 18 March 2026 03:49:16 +0000 (0:00:00.326) 0:00:21.313 ******* 2026-03-18 03:49:22.209007 | orchestrator | ok: [testbed-node-3] 2026-03-18 03:49:22.209018 | orchestrator | ok: [testbed-node-4] 2026-03-18 03:49:22.209029 | orchestrator | ok: [testbed-node-5] 2026-03-18 03:49:22.209039 | orchestrator | 2026-03-18 03:49:22.209050 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-18 03:49:22.209061 | orchestrator | Wednesday 18 March 2026 03:49:16 +0000 (0:00:00.554) 0:00:21.868 ******* 2026-03-18 03:49:22.209072 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-18 03:49:22.209083 | orchestrator | 2026-03-18 03:49:22.209094 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-18 03:49:22.209104 | orchestrator | Wednesday 18 March 2026 03:49:16 +0000 (0:00:00.267) 0:00:22.136 ******* 2026-03-18 03:49:22.209115 | orchestrator | skipping: [testbed-node-3] 2026-03-18 03:49:22.209126 | orchestrator | 2026-03-18 03:49:22.209136 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-18 03:49:22.209147 | orchestrator | Wednesday 18 March 2026 03:49:17 +0000 (0:00:00.250) 0:00:22.386 ******* 2026-03-18 03:49:22.209158 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-18 03:49:22.209168 | orchestrator | 2026-03-18 03:49:22.209179 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-18 03:49:22.209190 | orchestrator | Wednesday 18 March 2026 03:49:18 +0000 (0:00:01.804) 0:00:24.191 ******* 2026-03-18 03:49:22.209200 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-18 03:49:22.209211 | orchestrator | 2026-03-18 03:49:22.209222 | orchestrator | TASK 
[Aggregate test results step three] *************************************** 2026-03-18 03:49:22.209233 | orchestrator | Wednesday 18 March 2026 03:49:19 +0000 (0:00:00.263) 0:00:24.454 ******* 2026-03-18 03:49:22.209244 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-18 03:49:22.209255 | orchestrator | 2026-03-18 03:49:22.209283 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-18 03:49:22.209295 | orchestrator | Wednesday 18 March 2026 03:49:19 +0000 (0:00:00.274) 0:00:24.728 ******* 2026-03-18 03:49:22.209306 | orchestrator | 2026-03-18 03:49:22.209317 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-18 03:49:22.209327 | orchestrator | Wednesday 18 March 2026 03:49:19 +0000 (0:00:00.077) 0:00:24.806 ******* 2026-03-18 03:49:22.209338 | orchestrator | 2026-03-18 03:49:22.209348 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-18 03:49:22.209359 | orchestrator | Wednesday 18 March 2026 03:49:19 +0000 (0:00:00.075) 0:00:24.881 ******* 2026-03-18 03:49:22.209369 | orchestrator | 2026-03-18 03:49:22.209380 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-18 03:49:22.209391 | orchestrator | Wednesday 18 March 2026 03:49:19 +0000 (0:00:00.075) 0:00:24.957 ******* 2026-03-18 03:49:22.209407 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-18 03:49:22.209418 | orchestrator | 2026-03-18 03:49:22.209429 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-18 03:49:22.209440 | orchestrator | Wednesday 18 March 2026 03:49:21 +0000 (0:00:01.578) 0:00:26.536 ******* 2026-03-18 03:49:22.209456 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-03-18 03:49:22.209467 | orchestrator |  "msg": [ 2026-03-18 
03:49:22.209478 | orchestrator |  "Validator run completed.", 2026-03-18 03:49:22.209489 | orchestrator |  "You can find the report file here:", 2026-03-18 03:49:22.209500 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-18T03:48:55+00:00-report.json", 2026-03-18 03:49:22.209512 | orchestrator |  "on the following host:", 2026-03-18 03:49:22.209523 | orchestrator |  "testbed-manager" 2026-03-18 03:49:22.209534 | orchestrator |  ] 2026-03-18 03:49:22.209545 | orchestrator | } 2026-03-18 03:49:22.209556 | orchestrator | 2026-03-18 03:49:22.209567 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:49:22.209579 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-18 03:49:22.209593 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-18 03:49:22.209613 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-18 03:49:22.209628 | orchestrator | 2026-03-18 03:49:22.209651 | orchestrator | 2026-03-18 03:49:22.209676 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:49:22.209694 | orchestrator | Wednesday 18 March 2026 03:49:21 +0000 (0:00:00.599) 0:00:27.136 ******* 2026-03-18 03:49:22.209711 | orchestrator | =============================================================================== 2026-03-18 03:49:22.209729 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.69s 2026-03-18 03:49:22.209745 | orchestrator | Aggregate test results step one ----------------------------------------- 1.80s 2026-03-18 03:49:22.209762 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.59s 2026-03-18 03:49:22.209779 | orchestrator | Write report file 
------------------------------------------------------- 1.58s 2026-03-18 03:49:22.209798 | orchestrator | Get timestamp for report file ------------------------------------------- 0.86s 2026-03-18 03:49:22.209817 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.80s 2026-03-18 03:49:22.209835 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.80s 2026-03-18 03:49:22.209854 | orchestrator | Create report output directory ------------------------------------------ 0.70s 2026-03-18 03:49:22.209872 | orchestrator | Aggregate test results step one ----------------------------------------- 0.68s 2026-03-18 03:49:22.209892 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.68s 2026-03-18 03:49:22.209911 | orchestrator | Print report file information ------------------------------------------- 0.60s 2026-03-18 03:49:22.209928 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.58s 2026-03-18 03:49:22.209972 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.56s 2026-03-18 03:49:22.209991 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.55s 2026-03-18 03:49:22.210010 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.54s 2026-03-18 03:49:22.210100 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.52s 2026-03-18 03:49:22.210119 | orchestrator | Prepare test data ------------------------------------------------------- 0.52s 2026-03-18 03:49:22.210138 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s 2026-03-18 03:49:22.210172 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.49s 2026-03-18 03:49:22.210192 | orchestrator | Calculate OSD devices for each 
host ------------------------------------- 0.38s 2026-03-18 03:49:22.528444 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-18 03:49:22.538632 | orchestrator | + set -e 2026-03-18 03:49:22.538722 | orchestrator | + source /opt/manager-vars.sh 2026-03-18 03:49:22.539906 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-18 03:49:22.539987 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-18 03:49:22.540000 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-18 03:49:22.540012 | orchestrator | ++ CEPH_VERSION=reef 2026-03-18 03:49:22.540023 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-18 03:49:22.540036 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-18 03:49:22.540047 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-18 03:49:22.540058 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-18 03:49:22.540069 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-18 03:49:22.540080 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-18 03:49:22.540091 | orchestrator | ++ export ARA=false 2026-03-18 03:49:22.540102 | orchestrator | ++ ARA=false 2026-03-18 03:49:22.540114 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-18 03:49:22.540125 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-18 03:49:22.540135 | orchestrator | ++ export TEMPEST=false 2026-03-18 03:49:22.540146 | orchestrator | ++ TEMPEST=false 2026-03-18 03:49:22.540157 | orchestrator | ++ export IS_ZUUL=true 2026-03-18 03:49:22.540168 | orchestrator | ++ IS_ZUUL=true 2026-03-18 03:49:22.540179 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 03:49:22.540190 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 03:49:22.540201 | orchestrator | ++ export EXTERNAL_API=false 2026-03-18 03:49:22.540213 | orchestrator | ++ EXTERNAL_API=false 2026-03-18 03:49:22.540232 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-18 03:49:22.540251 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-18 
03:49:22.540269 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-18 03:49:22.540287 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-18 03:49:22.540560 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-18 03:49:22.540589 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-18 03:49:22.540607 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-18 03:49:22.540625 | orchestrator | + source /etc/os-release 2026-03-18 03:49:22.540643 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-18 03:49:22.540660 | orchestrator | ++ NAME=Ubuntu 2026-03-18 03:49:22.540676 | orchestrator | ++ VERSION_ID=24.04 2026-03-18 03:49:22.540695 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-18 03:49:22.540712 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-18 03:49:22.540730 | orchestrator | ++ ID=ubuntu 2026-03-18 03:49:22.540747 | orchestrator | ++ ID_LIKE=debian 2026-03-18 03:49:22.540766 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-03-18 03:49:22.540784 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-18 03:49:22.540803 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-18 03:49:22.540822 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-18 03:49:22.540843 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-18 03:49:22.540861 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-18 03:49:22.540876 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-18 03:49:22.540888 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-03-18 03:49:22.540900 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-18 03:49:22.558143 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-18 03:49:48.694298 | orchestrator | 
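The trace above shows `200-infrastructure.sh` sourcing `/etc/os-release` and picking a Debian/Ubuntu package list before installing. A minimal, self-contained sketch of that branch (a canned file stands in for `/etc/os-release`; the package list is copied from the trace):

```shell
# Canned stand-in for /etc/os-release, so the sketch runs anywhere.
cat > os-release.sample <<'EOF'
ID=ubuntu
VERSION_CODENAME=noble
EOF
. ./os-release.sample

# Pick the apt package list on Debian-family systems, as the script does.
if [ "$ID" = ubuntu ] || [ "$ID" = debian ]; then
    packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
else
    packages=''
fi
echo "$ID/$VERSION_CODENAME -> $packages"
```

In the real script the check is followed by `dpkg -s $packages` and, if needed, `sudo apt-get install -y $packages`, as traced above.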
2026-03-18 03:49:48.694409 | orchestrator | # Status of Elasticsearch 2026-03-18 03:49:48.694426 | orchestrator | 2026-03-18 03:49:48.694438 | orchestrator | + pushd /opt/configuration/contrib 2026-03-18 03:49:48.694451 | orchestrator | + echo 2026-03-18 03:49:48.694462 | orchestrator | + echo '# Status of Elasticsearch' 2026-03-18 03:49:48.694473 | orchestrator | + echo 2026-03-18 03:49:48.694484 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-03-18 03:49:48.886808 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-03-18 03:49:48.887021 | orchestrator | 2026-03-18 03:49:48.887043 | orchestrator | # Status of MariaDB 2026-03-18 03:49:48.887056 | orchestrator | 2026-03-18 03:49:48.887067 | orchestrator | + echo 2026-03-18 03:49:48.887079 | orchestrator | + echo '# Status of MariaDB' 2026-03-18 03:49:48.887090 | orchestrator | + echo 2026-03-18 03:49:48.888030 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-18 03:49:48.930621 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-18 03:49:48.930700 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-18 03:49:48.930710 | orchestrator | + MARIADB_USER=root_shard_0 2026-03-18 03:49:48.930718 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-03-18 03:49:48.999527 | orchestrator | Reading package lists... 2026-03-18 03:49:49.384120 | orchestrator | Building dependency tree... 2026-03-18 03:49:49.384738 | orchestrator | Reading state information... 2026-03-18 03:49:49.785837 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 
2026-03-18 03:49:49.785990 | orchestrator | bc set to manually installed. 2026-03-18 03:49:49.786008 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2026-03-18 03:49:50.467806 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-03-18 03:49:50.469218 | orchestrator | 2026-03-18 03:49:50.469276 | orchestrator | # Status of Prometheus 2026-03-18 03:49:50.469290 | orchestrator | 2026-03-18 03:49:50.469298 | orchestrator | + echo 2026-03-18 03:49:50.469303 | orchestrator | + echo '# Status of Prometheus' 2026-03-18 03:49:50.469309 | orchestrator | + echo 2026-03-18 03:49:50.469314 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-03-18 03:49:50.532984 | orchestrator | Unauthorized 2026-03-18 03:49:50.536687 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-03-18 03:49:50.596985 | orchestrator | Unauthorized 2026-03-18 03:49:50.600949 | orchestrator | 2026-03-18 03:49:50.601023 | orchestrator | # Status of RabbitMQ 2026-03-18 03:49:50.601038 | orchestrator | 2026-03-18 03:49:50.601050 | orchestrator | + echo 2026-03-18 03:49:50.601062 | orchestrator | + echo '# Status of RabbitMQ' 2026-03-18 03:49:50.601074 | orchestrator | + echo 2026-03-18 03:49:50.601442 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-18 03:49:50.659360 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-18 03:49:50.659450 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-18 03:49:50.659467 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-03-18 03:49:51.129662 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-03-18 03:49:51.139870 | orchestrator | 2026-03-18 03:49:51.140026 | orchestrator | # Status of Redis 2026-03-18 03:49:51.140044 | orchestrator | 2026-03-18 03:49:51.140057 | orchestrator | + echo 2026-03-18 03:49:51.140068 | orchestrator | + 
echo '# Status of Redis' 2026-03-18 03:49:51.140080 | orchestrator | + echo 2026-03-18 03:49:51.140093 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-03-18 03:49:51.149572 | orchestrator | TCP OK - 0.004 second response time on 192.168.16.10 port 6379|time=0.003813s;;;0.000000;10.000000 2026-03-18 03:49:51.150502 | orchestrator | 2026-03-18 03:49:51.150554 | orchestrator | # Create backup of MariaDB database 2026-03-18 03:49:51.150578 | orchestrator | + popd 2026-03-18 03:49:51.150590 | orchestrator | + echo 2026-03-18 03:49:51.150602 | orchestrator | + echo '# Create backup of MariaDB database' 2026-03-18 03:49:51.150613 | orchestrator | + echo 2026-03-18 03:49:51.150625 | orchestrator | 2026-03-18 03:49:51.150636 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-03-18 03:49:53.239439 | orchestrator | 2026-03-18 03:49:53 | INFO  | Task bc924885-d825-4ca1-bc02-7e60b180f048 (mariadb_backup) was prepared for execution. 2026-03-18 03:49:53.241336 | orchestrator | 2026-03-18 03:49:53 | INFO  | It takes a moment until task bc924885-d825-4ca1-bc02-7e60b180f048 (mariadb_backup) has been started and output is visible here. 
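The `semver` helper invoked in the MariaDB and RabbitMQ checks above (`semver 9.5.0 10.0.0-0` yielding `-1`) is not shown in the log. A hypothetical stand-in with the same contract, relying on GNU `sort -V` for version ordering, could look like:

```shell
# Hypothetical stand-in for the semver helper seen in the trace:
# prints -1, 0, or 1 depending on how version A compares to version B.
semver_cmp() {
    a=${1%%-*}; b=${2%%-*}   # strip pre-release suffix: 10.0.0-0 -> 10.0.0
    if [ "$a" = "$b" ]; then printf '0\n'; return; fi
    # sort -V orders versions numerically per component (9.5.0 < 10.0.0).
    if [ "$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)" = "$a" ]; then
        printf '%s\n' -1
    else
        printf '1\n'
    fi
}

semver_cmp 9.5.0 10.0.0-0   # prints -1, matching the trace above
```

The script then branches on the result (`[[ -1 -ge 0 ]]`) to decide whether the newer sharded MariaDB user (`root_shard_0`) applies.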
2026-03-18 03:52:16.677102 | orchestrator | 2026-03-18 03:52:16.677251 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 03:52:16.677272 | orchestrator | 2026-03-18 03:52:16.677306 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 03:52:16.677319 | orchestrator | Wednesday 18 March 2026 03:49:57 +0000 (0:00:00.179) 0:00:00.179 ******* 2026-03-18 03:52:16.677355 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:52:16.677368 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:52:16.677378 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:52:16.677389 | orchestrator | 2026-03-18 03:52:16.677400 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 03:52:16.677411 | orchestrator | Wednesday 18 March 2026 03:49:57 +0000 (0:00:00.338) 0:00:00.517 ******* 2026-03-18 03:52:16.677423 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-18 03:52:16.677434 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-18 03:52:16.677444 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-18 03:52:16.677455 | orchestrator | 2026-03-18 03:52:16.677466 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-18 03:52:16.677477 | orchestrator | 2026-03-18 03:52:16.677488 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-18 03:52:16.677498 | orchestrator | Wednesday 18 March 2026 03:49:58 +0000 (0:00:00.627) 0:00:01.144 ******* 2026-03-18 03:52:16.677509 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 03:52:16.677520 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-18 03:52:16.677531 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-18 03:52:16.677542 | orchestrator | 
2026-03-18 03:52:16.677563 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-18 03:52:16.677583 | orchestrator | Wednesday 18 March 2026 03:49:58 +0000 (0:00:00.476) 0:00:01.621 ******* 2026-03-18 03:52:16.677602 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 03:52:16.677622 | orchestrator | 2026-03-18 03:52:16.677641 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-03-18 03:52:16.677661 | orchestrator | Wednesday 18 March 2026 03:49:59 +0000 (0:00:00.583) 0:00:02.205 ******* 2026-03-18 03:52:16.677678 | orchestrator | ok: [testbed-node-2] 2026-03-18 03:52:16.677697 | orchestrator | ok: [testbed-node-1] 2026-03-18 03:52:16.677716 | orchestrator | ok: [testbed-node-0] 2026-03-18 03:52:16.677735 | orchestrator | 2026-03-18 03:52:16.677754 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-03-18 03:52:16.677766 | orchestrator | Wednesday 18 March 2026 03:50:02 +0000 (0:00:03.221) 0:00:05.426 ******* 2026-03-18 03:52:16.677776 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-18 03:52:16.677787 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-03-18 03:52:16.677799 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-18 03:52:16.677810 | orchestrator | mariadb_bootstrap_restart 2026-03-18 03:52:16.677821 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:52:16.677832 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:52:16.677843 | orchestrator | changed: [testbed-node-0] 2026-03-18 03:52:16.677885 | orchestrator | 2026-03-18 03:52:16.677898 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-18 03:52:16.677909 | orchestrator | 
skipping: no hosts matched 2026-03-18 03:52:16.677920 | orchestrator | 2026-03-18 03:52:16.677931 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-18 03:52:16.677942 | orchestrator | skipping: no hosts matched 2026-03-18 03:52:16.677953 | orchestrator | 2026-03-18 03:52:16.677964 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-18 03:52:16.677975 | orchestrator | skipping: no hosts matched 2026-03-18 03:52:16.677985 | orchestrator | 2026-03-18 03:52:16.677996 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-18 03:52:16.678007 | orchestrator | 2026-03-18 03:52:16.678077 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-18 03:52:16.678089 | orchestrator | Wednesday 18 March 2026 03:52:15 +0000 (0:02:12.878) 0:02:18.305 ******* 2026-03-18 03:52:16.678112 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:52:16.678123 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:52:16.678134 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:52:16.678145 | orchestrator | 2026-03-18 03:52:16.678156 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-18 03:52:16.678167 | orchestrator | Wednesday 18 March 2026 03:52:15 +0000 (0:00:00.334) 0:02:18.639 ******* 2026-03-18 03:52:16.678178 | orchestrator | skipping: [testbed-node-0] 2026-03-18 03:52:16.678189 | orchestrator | skipping: [testbed-node-1] 2026-03-18 03:52:16.678199 | orchestrator | skipping: [testbed-node-2] 2026-03-18 03:52:16.678210 | orchestrator | 2026-03-18 03:52:16.678221 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:52:16.678234 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 
03:52:16.678246 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-18 03:52:16.678257 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-18 03:52:16.678268 | orchestrator | 2026-03-18 03:52:16.678279 | orchestrator | 2026-03-18 03:52:16.678290 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:52:16.678301 | orchestrator | Wednesday 18 March 2026 03:52:16 +0000 (0:00:00.438) 0:02:19.077 ******* 2026-03-18 03:52:16.678312 | orchestrator | =============================================================================== 2026-03-18 03:52:16.678323 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 132.88s 2026-03-18 03:52:16.678357 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.22s 2026-03-18 03:52:16.678368 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2026-03-18 03:52:16.678380 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.58s 2026-03-18 03:52:16.678391 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.48s 2026-03-18 03:52:16.678402 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.44s 2026-03-18 03:52:16.678413 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-03-18 03:52:16.678424 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.33s 2026-03-18 03:52:16.991692 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-03-18 03:52:16.997520 | orchestrator | + set -e 2026-03-18 03:52:16.997588 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-18 03:52:16.998122 | orchestrator | ++ export 
INTERACTIVE=false 2026-03-18 03:52:16.998160 | orchestrator | ++ INTERACTIVE=false 2026-03-18 03:52:16.998179 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-18 03:52:16.998199 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-18 03:52:16.998217 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-18 03:52:17.001502 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-18 03:52:17.009085 | orchestrator | 2026-03-18 03:52:17.009147 | orchestrator | # OpenStack endpoints 2026-03-18 03:52:17.009162 | orchestrator | 2026-03-18 03:52:17.009173 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-18 03:52:17.009184 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-18 03:52:17.009196 | orchestrator | + export OS_CLOUD=admin 2026-03-18 03:52:17.009207 | orchestrator | + OS_CLOUD=admin 2026-03-18 03:52:17.009218 | orchestrator | + echo 2026-03-18 03:52:17.009230 | orchestrator | + echo '# OpenStack endpoints' 2026-03-18 03:52:17.009240 | orchestrator | + echo 2026-03-18 03:52:17.009251 | orchestrator | + openstack endpoint list 2026-03-18 03:52:20.254421 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-18 03:52:20.254514 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-03-18 03:52:20.254545 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-18 03:52:20.254553 | orchestrator | | 0d48994100f74b2d8c89b2101da95161 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-03-18 03:52:20.254561 | orchestrator | | 0ec26bd007364961b4f30b78dd3d9e59 | RegionOne | neutron | network | True | 
internal | https://api-int.testbed.osism.xyz:9696 | 2026-03-18 03:52:20.254568 | orchestrator | | 0f3b88cf368d4a5c81c8d9e75394ec8b | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-03-18 03:52:20.254574 | orchestrator | | 2d56a42fcedd40c9aff9ba71a2d7032a | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-18 03:52:20.254582 | orchestrator | | 2ed53f7f70c54b7f99e6cb0963d0b076 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-03-18 03:52:20.254590 | orchestrator | | 32d6041268ac4fbd80c1f5b1d6e3b482 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-03-18 03:52:20.254597 | orchestrator | | 370645f9f5bf4c079514867acb6ca931 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-03-18 03:52:20.254604 | orchestrator | | 3e444ab5847a4e6a8e1274abdf6ce9bf | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-03-18 03:52:20.254611 | orchestrator | | 3e9e044fb2c043f3bf79e98dfc13f13c | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-03-18 03:52:20.254618 | orchestrator | | 44ba11471d9c4ea78fcf0485ae85360d | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-03-18 03:52:20.254625 | orchestrator | | 63860251b2b64644b4a9b3449f083287 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-03-18 03:52:20.254631 | orchestrator | | 717b2d9e8b7f4495aa0d6123e0b390c2 | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-03-18 03:52:20.254638 | orchestrator | | 807ecf172f1145b29af4dec9b469b530 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-03-18 
03:52:20.254645 | orchestrator | | 815486b819244b0f9be1e379fe046428 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-18 03:52:20.254652 | orchestrator | | 87cddc9c09cd4f2ab430e330e97525f1 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-03-18 03:52:20.254659 | orchestrator | | 89b73c9d5ecf4e788723128787147a1f | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-03-18 03:52:20.254665 | orchestrator | | 8e3ce952027d49798080de871d34c6bb | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-03-18 03:52:20.254672 | orchestrator | | 93e639861f954f1dab7c5fb4785d3fe5 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-03-18 03:52:20.254679 | orchestrator | | 9ccbd09e03c146bbbe3fccde5b9f88e6 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-03-18 03:52:20.254692 | orchestrator | | a4200519ef8346c59516023693e39301 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-03-18 03:52:20.254712 | orchestrator | | aff572039f6944ba831013c75295f87c | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-03-18 03:52:20.254724 | orchestrator | | b9c9dcec22b645c8a0e63db3f49b0c2d | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-03-18 03:52:20.254731 | orchestrator | | bbb4ea2ff8cb425988ab7a4c0102aad1 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-03-18 03:52:20.254737 | orchestrator | | bfd419904590439c8c531291c33cf843 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-03-18 03:52:20.254744 | orchestrator | | c5a83c119add4a9faaebe40704fee5b4 | RegionOne | octavia | load-balancer | 
True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-03-18 03:52:20.254751 | orchestrator | | d4dbde675c6440eeb5d99a362c87829f | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-18 03:52:20.254758 | orchestrator | | dbac6137ad454f7c94186fc753d3368e | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-18 03:52:20.254764 | orchestrator | | e72e87b8faf24f20b244d1a84521cd7d | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-03-18 03:52:20.254771 | orchestrator | | e7e90b5ef3884b8ca0d0c30bd4eda4e4 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-03-18 03:52:20.254778 | orchestrator | | fd396522d5ba4cbba9f39fb356a225ff | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-03-18 03:52:20.254784 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-18 03:52:20.560253 | orchestrator | 2026-03-18 03:52:20.560379 | orchestrator | # Cinder 2026-03-18 03:52:20.560406 | orchestrator | 2026-03-18 03:52:20.560424 | orchestrator | + echo 2026-03-18 03:52:20.560441 | orchestrator | + echo '# Cinder' 2026-03-18 03:52:20.560457 | orchestrator | + echo 2026-03-18 03:52:20.560475 | orchestrator | + openstack volume service list 2026-03-18 03:52:23.210462 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-18 03:52:23.210577 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-03-18 03:52:23.210596 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-18 03:52:23.210608 
| orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-18T03:52:14.000000 | 2026-03-18 03:52:23.210620 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-18T03:52:15.000000 | 2026-03-18 03:52:23.210639 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-18T03:52:15.000000 | 2026-03-18 03:52:23.210658 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-18T03:52:14.000000 | 2026-03-18 03:52:23.210677 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-18T03:52:21.000000 | 2026-03-18 03:52:23.210697 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-18T03:52:22.000000 | 2026-03-18 03:52:23.210708 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-18T03:52:19.000000 | 2026-03-18 03:52:23.210746 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-18T03:52:22.000000 | 2026-03-18 03:52:23.210758 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-18T03:52:22.000000 | 2026-03-18 03:52:23.210769 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-18 03:52:23.461021 | orchestrator | 2026-03-18 03:52:23.461115 | orchestrator | # Neutron 2026-03-18 03:52:23.461130 | orchestrator | 2026-03-18 03:52:23.461141 | orchestrator | + echo 2026-03-18 03:52:23.461153 | orchestrator | + echo '# Neutron' 2026-03-18 03:52:23.461166 | orchestrator | + echo 2026-03-18 03:52:23.461177 | orchestrator | + openstack network agent list 2026-03-18 03:52:26.216221 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-18 03:52:26.217382 | orchestrator | | ID | Agent Type | 
Host | Availability Zone | Alive | State | Binary | 2026-03-18 03:52:26.217427 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-18 03:52:26.217439 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-03-18 03:52:26.217451 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-03-18 03:52:26.217480 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-03-18 03:52:26.217492 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-03-18 03:52:26.217503 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-03-18 03:52:26.217513 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-03-18 03:52:26.217524 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-18 03:52:26.217535 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-18 03:52:26.217545 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-18 03:52:26.217556 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-18 03:52:26.486554 | orchestrator | + openstack network service provider list 2026-03-18 03:52:29.228361 | orchestrator | +---------------+------+---------+ 2026-03-18 03:52:29.228483 | orchestrator | | Service 
Type | Name | Default | 2026-03-18 03:52:29.228504 | orchestrator | +---------------+------+---------+ 2026-03-18 03:52:29.228521 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-03-18 03:52:29.228537 | orchestrator | +---------------+------+---------+ 2026-03-18 03:52:29.497497 | orchestrator | 2026-03-18 03:52:29.497596 | orchestrator | # Nova 2026-03-18 03:52:29.497612 | orchestrator | 2026-03-18 03:52:29.497625 | orchestrator | + echo 2026-03-18 03:52:29.497638 | orchestrator | + echo '# Nova' 2026-03-18 03:52:29.497651 | orchestrator | + echo 2026-03-18 03:52:29.497663 | orchestrator | + openstack compute service list 2026-03-18 03:52:32.273203 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-18 03:52:32.273301 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-03-18 03:52:32.273337 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-18 03:52:32.273345 | orchestrator | | dadce6a8-e653-4732-8af0-47b3355ec3be | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-18T03:52:30.000000 | 2026-03-18 03:52:32.273355 | orchestrator | | 662f28a1-7a1d-4c52-9182-a4c0fe361e82 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-18T03:52:26.000000 | 2026-03-18 03:52:32.273364 | orchestrator | | 34a11834-941e-4115-a280-f8b472d4ee3b | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-18T03:52:27.000000 | 2026-03-18 03:52:32.273372 | orchestrator | | 46371faf-5feb-4162-9acb-bc69b1f2dd5e | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-18T03:52:29.000000 | 2026-03-18 03:52:32.273381 | orchestrator | | 4d95d131-9c99-4b46-a524-f3ae22e31c61 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-18T03:52:31.000000 | 2026-03-18 03:52:32.273389 | 
orchestrator | | 3418a00a-f2ad-46c4-8c46-5b34e1dbfaed | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-18T03:52:31.000000 | 2026-03-18 03:52:32.273397 | orchestrator | | 8ac23a10-ee85-445b-959b-07a6ce64e663 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-18T03:52:31.000000 | 2026-03-18 03:52:32.273406 | orchestrator | | 91762587-0ce1-4a98-b3e5-7400841e62d4 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-18T03:52:32.000000 | 2026-03-18 03:52:32.273414 | orchestrator | | b8edffce-58b5-4191-bf2e-d7530b413fbb | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-18T03:52:22.000000 | 2026-03-18 03:52:32.273423 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-18 03:52:32.547973 | orchestrator | + openstack hypervisor list 2026-03-18 03:52:35.217972 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-18 03:52:35.218177 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-03-18 03:52:35.218204 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-18 03:52:35.218216 | orchestrator | | aaba8f5d-767f-4abe-9e11-4cf17935d84d | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-03-18 03:52:35.218228 | orchestrator | | 0e37f27a-b9b3-46f3-9a74-5876de83d6a8 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-03-18 03:52:35.218239 | orchestrator | | 83e03c3b-3a53-4b2d-b0f7-f83e28687ec8 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-03-18 03:52:35.218250 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-18 03:52:35.515500 | orchestrator | 2026-03-18 03:52:35.515583 | orchestrator | # Run OpenStack test play 2026-03-18 03:52:35.515595 | orchestrator 
| 2026-03-18 03:52:35.515607 | orchestrator | + echo 2026-03-18 03:52:35.515615 | orchestrator | + echo '# Run OpenStack test play' 2026-03-18 03:52:35.515622 | orchestrator | + echo 2026-03-18 03:52:35.515629 | orchestrator | + osism apply --environment openstack test 2026-03-18 03:52:37.450353 | orchestrator | 2026-03-18 03:52:37 | INFO  | Trying to run play test in environment openstack 2026-03-18 03:52:47.550806 | orchestrator | 2026-03-18 03:52:47 | INFO  | Task 5e15988c-3f94-4b8a-8cbd-0e37e9b97ea6 (test) was prepared for execution. 2026-03-18 03:52:47.550970 | orchestrator | 2026-03-18 03:52:47 | INFO  | It takes a moment until task 5e15988c-3f94-4b8a-8cbd-0e37e9b97ea6 (test) has been started and output is visible here. 2026-03-18 03:55:32.773965 | orchestrator | 2026-03-18 03:55:32.774136 | orchestrator | PLAY [Create test project] ***************************************************** 2026-03-18 03:55:32.774191 | orchestrator | 2026-03-18 03:55:32.774204 | orchestrator | TASK [Create test domain] ****************************************************** 2026-03-18 03:55:32.774217 | orchestrator | Wednesday 18 March 2026 03:52:51 +0000 (0:00:00.071) 0:00:00.071 ******* 2026-03-18 03:55:32.774228 | orchestrator | changed: [localhost] 2026-03-18 03:55:32.774241 | orchestrator | 2026-03-18 03:55:32.774277 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-03-18 03:55:32.774289 | orchestrator | Wednesday 18 March 2026 03:52:55 +0000 (0:00:03.622) 0:00:03.693 ******* 2026-03-18 03:55:32.774300 | orchestrator | changed: [localhost] 2026-03-18 03:55:32.774311 | orchestrator | 2026-03-18 03:55:32.774322 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-03-18 03:55:32.774333 | orchestrator | Wednesday 18 March 2026 03:52:59 +0000 (0:00:04.216) 0:00:07.910 ******* 2026-03-18 03:55:32.774343 | orchestrator | changed: [localhost] 2026-03-18 03:55:32.774354 | orchestrator 
| 2026-03-18 03:55:32.774365 | orchestrator | TASK [Create test project] ***************************************************** 2026-03-18 03:55:32.774376 | orchestrator | Wednesday 18 March 2026 03:53:06 +0000 (0:00:06.798) 0:00:14.709 ******* 2026-03-18 03:55:32.774386 | orchestrator | changed: [localhost] 2026-03-18 03:55:32.774397 | orchestrator | 2026-03-18 03:55:32.774408 | orchestrator | TASK [Create test user] ******************************************************** 2026-03-18 03:55:32.774419 | orchestrator | Wednesday 18 March 2026 03:53:10 +0000 (0:00:04.135) 0:00:18.844 ******* 2026-03-18 03:55:32.774429 | orchestrator | changed: [localhost] 2026-03-18 03:55:32.774440 | orchestrator | 2026-03-18 03:55:32.774451 | orchestrator | TASK [Add member roles to user test] ******************************************* 2026-03-18 03:55:32.774462 | orchestrator | Wednesday 18 March 2026 03:53:14 +0000 (0:00:04.160) 0:00:23.004 ******* 2026-03-18 03:55:32.774473 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-03-18 03:55:32.774485 | orchestrator | changed: [localhost] => (item=member) 2026-03-18 03:55:32.774497 | orchestrator | changed: [localhost] => (item=creator) 2026-03-18 03:55:32.774508 | orchestrator | 2026-03-18 03:55:32.774519 | orchestrator | TASK [Create test server group] ************************************************ 2026-03-18 03:55:32.774529 | orchestrator | Wednesday 18 March 2026 03:53:26 +0000 (0:00:11.543) 0:00:34.547 ******* 2026-03-18 03:55:32.774540 | orchestrator | changed: [localhost] 2026-03-18 03:55:32.774551 | orchestrator | 2026-03-18 03:55:32.774579 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-03-18 03:55:32.774590 | orchestrator | Wednesday 18 March 2026 03:53:30 +0000 (0:00:04.432) 0:00:38.980 ******* 2026-03-18 03:55:32.774601 | orchestrator | changed: [localhost] 2026-03-18 03:55:32.774626 | orchestrator | 2026-03-18 03:55:32.774638 | orchestrator | 
TASK [Add rule to ssh security group] ****************************************** 2026-03-18 03:55:32.774648 | orchestrator | Wednesday 18 March 2026 03:53:35 +0000 (0:00:04.701) 0:00:43.681 ******* 2026-03-18 03:55:32.774659 | orchestrator | changed: [localhost] 2026-03-18 03:55:32.774670 | orchestrator | 2026-03-18 03:55:32.774681 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-03-18 03:55:32.774737 | orchestrator | Wednesday 18 March 2026 03:53:39 +0000 (0:00:04.249) 0:00:47.930 ******* 2026-03-18 03:55:32.774748 | orchestrator | changed: [localhost] 2026-03-18 03:55:32.774793 | orchestrator | 2026-03-18 03:55:32.774806 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2026-03-18 03:55:32.774817 | orchestrator | Wednesday 18 March 2026 03:53:43 +0000 (0:00:04.043) 0:00:51.974 ******* 2026-03-18 03:55:32.774827 | orchestrator | changed: [localhost] 2026-03-18 03:55:32.774838 | orchestrator | 2026-03-18 03:55:32.774849 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-03-18 03:55:32.774860 | orchestrator | Wednesday 18 March 2026 03:53:47 +0000 (0:00:04.152) 0:00:56.126 ******* 2026-03-18 03:55:32.774871 | orchestrator | changed: [localhost] 2026-03-18 03:55:32.774881 | orchestrator | 2026-03-18 03:55:32.774892 | orchestrator | TASK [Create test network] ***************************************************** 2026-03-18 03:55:32.774903 | orchestrator | Wednesday 18 March 2026 03:53:51 +0000 (0:00:03.918) 0:01:00.045 ******* 2026-03-18 03:55:32.774914 | orchestrator | changed: [localhost] 2026-03-18 03:55:32.774925 | orchestrator | 2026-03-18 03:55:32.774936 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-03-18 03:55:32.774947 | orchestrator | Wednesday 18 March 2026 03:53:56 +0000 (0:00:04.543) 0:01:04.588 ******* 2026-03-18 03:55:32.774968 | orchestrator | 
changed: [localhost] 2026-03-18 03:55:32.774979 | orchestrator | 2026-03-18 03:55:32.774990 | orchestrator | TASK [Create test router] ****************************************************** 2026-03-18 03:55:32.775001 | orchestrator | Wednesday 18 March 2026 03:54:01 +0000 (0:00:05.226) 0:01:09.815 ******* 2026-03-18 03:55:32.775011 | orchestrator | changed: [localhost] 2026-03-18 03:55:32.775022 | orchestrator | 2026-03-18 03:55:32.775033 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-03-18 03:55:32.775044 | orchestrator | 2026-03-18 03:55:32.775055 | orchestrator | TASK [Get test server group] *************************************************** 2026-03-18 03:55:32.775066 | orchestrator | Wednesday 18 March 2026 03:54:12 +0000 (0:00:10.632) 0:01:20.447 ******* 2026-03-18 03:55:32.775077 | orchestrator | ok: [localhost] 2026-03-18 03:55:32.775089 | orchestrator | 2026-03-18 03:55:32.775100 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-03-18 03:55:32.775111 | orchestrator | Wednesday 18 March 2026 03:54:15 +0000 (0:00:03.523) 0:01:23.970 ******* 2026-03-18 03:55:32.775121 | orchestrator | skipping: [localhost] 2026-03-18 03:55:32.775132 | orchestrator | 2026-03-18 03:55:32.775149 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-03-18 03:55:32.775161 | orchestrator | Wednesday 18 March 2026 03:54:15 +0000 (0:00:00.044) 0:01:24.015 ******* 2026-03-18 03:55:32.775171 | orchestrator | skipping: [localhost] 2026-03-18 03:55:32.775182 | orchestrator | 2026-03-18 03:55:32.775193 | orchestrator | TASK [Delete test instances] *************************************************** 2026-03-18 03:55:32.775204 | orchestrator | Wednesday 18 March 2026 03:54:15 +0000 (0:00:00.058) 0:01:24.074 ******* 2026-03-18 03:55:32.775215 | orchestrator | skipping: [localhost] => (item=test-4)  2026-03-18 03:55:32.775227 | 
orchestrator | skipping: [localhost] => (item=test-3)  2026-03-18 03:55:32.775259 | orchestrator | skipping: [localhost] => (item=test-2)  2026-03-18 03:55:32.775270 | orchestrator | skipping: [localhost] => (item=test-1)  2026-03-18 03:55:32.775281 | orchestrator | skipping: [localhost] => (item=test)  2026-03-18 03:55:32.775293 | orchestrator | skipping: [localhost] 2026-03-18 03:55:32.775303 | orchestrator | 2026-03-18 03:55:32.775315 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-03-18 03:55:32.775325 | orchestrator | Wednesday 18 March 2026 03:54:16 +0000 (0:00:00.168) 0:01:24.243 ******* 2026-03-18 03:55:32.775336 | orchestrator | skipping: [localhost] 2026-03-18 03:55:32.775347 | orchestrator | 2026-03-18 03:55:32.775358 | orchestrator | TASK [Create test instances] *************************************************** 2026-03-18 03:55:32.775369 | orchestrator | Wednesday 18 March 2026 03:54:16 +0000 (0:00:00.152) 0:01:24.395 ******* 2026-03-18 03:55:32.775380 | orchestrator | changed: [localhost] => (item=test) 2026-03-18 03:55:32.775391 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-18 03:55:32.775402 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-18 03:55:32.775413 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-18 03:55:32.775424 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-18 03:55:32.775434 | orchestrator | 2026-03-18 03:55:32.775445 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-03-18 03:55:32.775456 | orchestrator | Wednesday 18 March 2026 03:54:21 +0000 (0:00:04.850) 0:01:29.246 ******* 2026-03-18 03:55:32.775467 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-03-18 03:55:32.775480 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 
2026-03-18 03:55:32.775491 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-03-18 03:55:32.775502 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-03-18 03:55:32.775512 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left). 2026-03-18 03:55:32.775524 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j490695722819.3772', 'results_file': '/ansible/.ansible_async/j490695722819.3772', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-18 03:55:32.775545 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j28278507865.3797', 'results_file': '/ansible/.ansible_async/j28278507865.3797', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-18 03:55:32.775557 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j655858529345.3822', 'results_file': '/ansible/.ansible_async/j655858529345.3822', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-18 03:55:32.775568 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j938495166664.3847', 'results_file': '/ansible/.ansible_async/j938495166664.3847', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-18 03:55:32.775579 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j531116146689.3872', 'results_file': '/ansible/.ansible_async/j531116146689.3872', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-18 03:55:32.775590 | orchestrator | 2026-03-18 03:55:32.775602 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-03-18 
03:55:32.775613 | orchestrator | Wednesday 18 March 2026 03:55:18 +0000 (0:00:57.452) 0:02:26.698 ******* 2026-03-18 03:55:32.775624 | orchestrator | changed: [localhost] => (item=test) 2026-03-18 03:55:32.775635 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-18 03:55:32.775646 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-18 03:55:32.775657 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-18 03:55:32.775667 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-18 03:55:32.775678 | orchestrator | 2026-03-18 03:55:32.775689 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-03-18 03:55:32.775700 | orchestrator | Wednesday 18 March 2026 03:55:23 +0000 (0:00:04.568) 0:02:31.267 ******* 2026-03-18 03:55:32.775711 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-03-18 03:55:32.775723 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j958313713192.3983', 'results_file': '/ansible/.ansible_async/j958313713192.3983', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-18 03:55:32.775739 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j170458659974.4008', 'results_file': '/ansible/.ansible_async/j170458659974.4008', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-18 03:55:32.775751 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j953712001950.4033', 'results_file': '/ansible/.ansible_async/j953712001950.4033', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-18 03:55:32.775789 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j746634480796.4058', 'results_file': '/ansible/.ansible_async/j746634480796.4058', 
'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-18 03:56:13.198730 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j725457113788.4091', 'results_file': '/ansible/.ansible_async/j725457113788.4091', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-18 03:56:13.198898 | orchestrator | 2026-03-18 03:56:13.198915 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-03-18 03:56:13.198928 | orchestrator | Wednesday 18 March 2026 03:55:32 +0000 (0:00:09.639) 0:02:40.906 ******* 2026-03-18 03:56:13.198940 | orchestrator | changed: [localhost] => (item=test) 2026-03-18 03:56:13.198953 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-18 03:56:13.198990 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-18 03:56:13.199001 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-18 03:56:13.199012 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-18 03:56:13.199023 | orchestrator | 2026-03-18 03:56:13.199034 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-03-18 03:56:13.199045 | orchestrator | Wednesday 18 March 2026 03:55:37 +0000 (0:00:04.672) 0:02:45.578 ******* 2026-03-18 03:56:13.199056 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-03-18 03:56:13.199069 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j582847537280.4160', 'results_file': '/ansible/.ansible_async/j582847537280.4160', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-18 03:56:13.199081 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j720879101087.4185', 'results_file': '/ansible/.ansible_async/j720879101087.4185', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-18 03:56:13.199092 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j871037322134.4211', 'results_file': '/ansible/.ansible_async/j871037322134.4211', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-18 03:56:13.199102 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j256821293997.4237', 'results_file': '/ansible/.ansible_async/j256821293997.4237', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-18 03:56:13.199114 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j142782889422.4263', 'results_file': '/ansible/.ansible_async/j142782889422.4263', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-18 03:56:13.199125 | orchestrator | 2026-03-18 03:56:13.199136 | orchestrator | TASK [Create test volume] ****************************************************** 2026-03-18 03:56:13.199148 | orchestrator | Wednesday 18 March 2026 03:55:47 +0000 (0:00:09.660) 0:02:55.239 ******* 2026-03-18 03:56:13.199160 | orchestrator | changed: [localhost] 2026-03-18 03:56:13.199177 | orchestrator | 2026-03-18 03:56:13.199194 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-03-18 03:56:13.199210 | orchestrator | Wednesday 18 March 
2026 03:55:53 +0000 (0:00:06.392) 0:03:01.632 ******* 2026-03-18 03:56:13.199223 | orchestrator | changed: [localhost] 2026-03-18 03:56:13.199236 | orchestrator | 2026-03-18 03:56:13.199249 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-03-18 03:56:13.199263 | orchestrator | Wednesday 18 March 2026 03:56:06 +0000 (0:00:13.488) 0:03:15.121 ******* 2026-03-18 03:56:13.199278 | orchestrator | ok: [localhost] 2026-03-18 03:56:13.199292 | orchestrator | 2026-03-18 03:56:13.199307 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-03-18 03:56:13.199321 | orchestrator | Wednesday 18 March 2026 03:56:12 +0000 (0:00:05.822) 0:03:20.943 ******* 2026-03-18 03:56:13.199335 | orchestrator | ok: [localhost] => { 2026-03-18 03:56:13.199350 | orchestrator |  "msg": "192.168.112.190" 2026-03-18 03:56:13.199383 | orchestrator | } 2026-03-18 03:56:13.199397 | orchestrator | 2026-03-18 03:56:13.199413 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 03:56:13.199429 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-18 03:56:13.199445 | orchestrator | 2026-03-18 03:56:13.199460 | orchestrator | 2026-03-18 03:56:13.199475 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 03:56:13.199491 | orchestrator | Wednesday 18 March 2026 03:56:12 +0000 (0:00:00.048) 0:03:20.992 ******* 2026-03-18 03:56:13.199522 | orchestrator | =============================================================================== 2026-03-18 03:56:13.199563 | orchestrator | Wait for instance creation to complete --------------------------------- 57.45s 2026-03-18 03:56:13.199605 | orchestrator | Attach test volume ----------------------------------------------------- 13.49s 2026-03-18 03:56:13.199620 | orchestrator | Add member roles to 
user test ------------------------------------------ 11.54s 2026-03-18 03:56:13.199631 | orchestrator | Create test router ----------------------------------------------------- 10.63s 2026-03-18 03:56:13.199639 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.66s 2026-03-18 03:56:13.199647 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.64s 2026-03-18 03:56:13.199655 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.80s 2026-03-18 03:56:13.199681 | orchestrator | Create test volume ------------------------------------------------------ 6.39s 2026-03-18 03:56:13.199689 | orchestrator | Create floating ip address ---------------------------------------------- 5.82s 2026-03-18 03:56:13.199697 | orchestrator | Create test subnet ------------------------------------------------------ 5.23s 2026-03-18 03:56:13.199705 | orchestrator | Create test instances --------------------------------------------------- 4.85s 2026-03-18 03:56:13.199713 | orchestrator | Create ssh security group ----------------------------------------------- 4.70s 2026-03-18 03:56:13.199721 | orchestrator | Add tag to instances ---------------------------------------------------- 4.67s 2026-03-18 03:56:13.199834 | orchestrator | Add metadata to instances ----------------------------------------------- 4.57s 2026-03-18 03:56:13.199845 | orchestrator | Create test network ----------------------------------------------------- 4.54s 2026-03-18 03:56:13.199853 | orchestrator | Create test server group ------------------------------------------------ 4.43s 2026-03-18 03:56:13.199861 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.25s 2026-03-18 03:56:13.199869 | orchestrator | Create test-admin user -------------------------------------------------- 4.22s 2026-03-18 03:56:13.199877 | orchestrator | Create test user 
-------------------------------------------------------- 4.16s 2026-03-18 03:56:13.199885 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.15s 2026-03-18 03:56:13.597318 | orchestrator | + server_list 2026-03-18 03:56:13.597409 | orchestrator | + openstack --os-cloud test server list 2026-03-18 03:56:17.203475 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-18 03:56:17.203575 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-03-18 03:56:17.203597 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-18 03:56:17.203614 | orchestrator | | 7687e20c-c0b1-4325-98c3-3bcb0f25cc41 | test-3 | ACTIVE | test=192.168.112.143, 192.168.200.66 | N/A (booted from volume) | SCS-1L-1 | 2026-03-18 03:56:17.203630 | orchestrator | | 278c75b6-4445-465e-8eb0-f8f9abed38c0 | test-4 | ACTIVE | test=192.168.112.154, 192.168.200.232 | N/A (booted from volume) | SCS-1L-1 | 2026-03-18 03:56:17.203647 | orchestrator | | 8efd2cb0-0886-4dc7-947a-0db04f25f9ce | test-1 | ACTIVE | test=192.168.112.194, 192.168.200.168 | N/A (booted from volume) | SCS-1L-1 | 2026-03-18 03:56:17.203663 | orchestrator | | e3394532-49ba-4506-adfb-daed7c614d7c | test-2 | ACTIVE | test=192.168.112.114, 192.168.200.183 | N/A (booted from volume) | SCS-1L-1 | 2026-03-18 03:56:17.203680 | orchestrator | | fd35595e-fee2-4eca-9265-6c2dbe4ec78c | test | ACTIVE | test=192.168.112.190, 192.168.200.13 | N/A (booted from volume) | SCS-1L-1 | 2026-03-18 03:56:17.203690 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-03-18 03:56:17.476460 | orchestrator | + openstack --os-cloud test server show test 2026-03-18 03:56:20.668107 | orchestrator 
| +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-18 03:56:20.668246 | orchestrator | | Field | Value | 2026-03-18 03:56:20.668267 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-18 03:56:20.668277 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-18 03:56:20.668287 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-18 03:56:20.668296 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-18 03:56:20.668305 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-03-18 03:56:20.668314 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-18 03:56:20.668323 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-18 03:56:20.668348 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-18 03:56:20.668364 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-18 03:56:20.668373 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-18 03:56:20.668382 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-18 03:56:20.668391 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-18 03:56:20.668400 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-18 03:56:20.668434 | orchestrator | | OS-EXT-STS:power_state | 
Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2026-03-18T03:54:55.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | test=192.168.112.190, 192.168.200.13 |
| config_drive | |
| created | 2026-03-18T03:54:26Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | f5aa549de82b88b4ba983c7e17b0118c4036e02cb8b1e99f75fd176b |
| host_status | None |
| id | fd35595e-fee2-4eca-9265-6c2dbe4ec78c |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 72dbcfcf13994c82b775b50cfd82b269 |
| properties | hostname='test' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2026-03-18T03:55:24Z |
| user_id | 351ac4e6e0474a7284ef66668892f3d2 |
| volumes_attached | delete_on_termination='True', id='166f7b4b-7c5e-46ff-b291-ca44083f5a52' |
| | delete_on_termination='False', id='4d1bd5db-fabf-479e-aff7-39629b993092' |
+-------------------------------------+---------------------------------------------+
2026-03-18 03:56:20.950999 | orchestrator | + openstack --os-cloud test server show test-1
+-------------------------------------+---------------------------------------------+
| Field | Value |
+-------------------------------------+---------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2026-03-18T03:54:53.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | test=192.168.112.194, 192.168.200.168 |
| config_drive | |
| created | 2026-03-18T03:54:26Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 9cd30ae8213756f0ab439342c44d6963989a0eaa9160e052bd501c70 |
| host_status | None |
| id | 8efd2cb0-0886-4dc7-947a-0db04f25f9ce |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-1 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 72dbcfcf13994c82b775b50cfd82b269 |
| properties | hostname='test-1' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2026-03-18T03:55:25Z |
| user_id | 351ac4e6e0474a7284ef66668892f3d2 |
| volumes_attached | delete_on_termination='True', id='2e3540be-2acf-4c5a-9f44-25c84334b57f' |
+-------------------------------------+---------------------------------------------+
2026-03-18 03:56:24.629364 | orchestrator | + openstack --os-cloud test server show test-2
+-------------------------------------+---------------------------------------------+
| Field | Value |
+-------------------------------------+---------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-2 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2026-03-18T03:54:53.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | test=192.168.112.114, 192.168.200.183 |
| config_drive | |
| created | 2026-03-18T03:54:26Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 9cd30ae8213756f0ab439342c44d6963989a0eaa9160e052bd501c70 |
| host_status | None |
| id | e3394532-49ba-4506-adfb-daed7c614d7c |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-2 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 72dbcfcf13994c82b775b50cfd82b269 |
| properties | hostname='test-2' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2026-03-18T03:55:25Z |
| user_id | 351ac4e6e0474a7284ef66668892f3d2 |
| volumes_attached | delete_on_termination='True', id='57423007-0db2-4647-8330-2843abff4923' |
+-------------------------------------+---------------------------------------------+
2026-03-18 03:56:28.112959 | orchestrator | + openstack --os-cloud test server show test-3
+-------------------------------------+---------------------------------------------+
| Field | Value |
+-------------------------------------+---------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-3 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2026-03-18T03:54:57.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | test=192.168.112.143, 192.168.200.66 |
| config_drive | |
| created | 2026-03-18T03:54:30Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | d0f684aad0065ed6b425c2936fd4a7daaa6251845001541d29051a9c |
| host_status | None |
| id | 7687e20c-c0b1-4325-98c3-3bcb0f25cc41 |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-3 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 72dbcfcf13994c82b775b50cfd82b269 |
| properties | hostname='test-3' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2026-03-18T03:55:26Z |
| user_id | 351ac4e6e0474a7284ef66668892f3d2 |
| volumes_attached | delete_on_termination='True', id='6aed72ae-d698-44fc-b974-c614b326a61d' |
+-------------------------------------+---------------------------------------------+
2026-03-18 03:56:31.429210 | orchestrator | + openstack --os-cloud test server show test-4
+-------------------------------------+---------------------------------------------+
| Field | Value |
+-------------------------------------+---------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-4 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2026-03-18T03:54:53.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | test=192.168.112.154, 192.168.200.232 |
| config_drive | |
| created | 2026-03-18T03:54:29Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | f5aa549de82b88b4ba983c7e17b0118c4036e02cb8b1e99f75fd176b |
| host_status | None |
| id | 278c75b6-4445-465e-8eb0-f8f9abed38c0 |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-4 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 72dbcfcf13994c82b775b50cfd82b269 |
| properties | hostname='test-4' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2026-03-18T03:55:27Z |
| user_id | 351ac4e6e0474a7284ef66668892f3d2 |
| volumes_attached | delete_on_termination='True', id='8bb1fe38-b266-4fd6-a368-bd6d64b94067' |
+-------------------------------------+---------------------------------------------+
2026-03-18 03:56:34.813228 | orchestrator | + server_ping
2026-03-18 03:56:34.814531 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-18 03:56:34.814651 | orchestrator | ++ tr -d '\r'
2026-03-18 03:56:37.783786 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-18 03:56:37.783860 | orchestrator | + ping -c3 192.168.112.194
2026-03-18 03:56:37.798977 | orchestrator | PING 192.168.112.194 (192.168.112.194) 56(84) bytes of data.
2026-03-18 03:56:37.799080 | orchestrator | 64 bytes from 192.168.112.194: icmp_seq=1 ttl=63 time=7.35 ms
2026-03-18 03:56:38.796281 | orchestrator | 64 bytes from 192.168.112.194: icmp_seq=2 ttl=63 time=2.78 ms
2026-03-18 03:56:39.797451 | orchestrator | 64 bytes from 192.168.112.194: icmp_seq=3 ttl=63 time=1.75 ms
2026-03-18 03:56:39.798439 | orchestrator | --- 192.168.112.194 ping statistics ---
2026-03-18 03:56:39.798451 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-18 03:56:39.798461 | orchestrator | rtt min/avg/max/mdev = 1.754/3.960/7.349/2.432 ms
2026-03-18 03:56:39.798484 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-18 03:56:39.798495 | orchestrator | + ping -c3 192.168.112.143
2026-03-18 03:56:39.808703 | orchestrator | PING 192.168.112.143 (192.168.112.143) 56(84) bytes of data.
2026-03-18 03:56:39.808820 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=1 ttl=63 time=8.03 ms
2026-03-18 03:56:40.804538 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=2 ttl=63 time=2.47 ms
2026-03-18 03:56:41.806454 | orchestrator | 64 bytes from 192.168.112.143: icmp_seq=3 ttl=63 time=1.91 ms
2026-03-18 03:56:41.806543 | orchestrator | --- 192.168.112.143 ping statistics ---
2026-03-18 03:56:41.806551 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-18 03:56:41.806558 | orchestrator | rtt min/avg/max/mdev = 1.906/4.134/8.027/2.762 ms
2026-03-18 03:56:41.806567 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-18 03:56:41.806574 | orchestrator | + ping -c3 192.168.112.190
2026-03-18 03:56:41.819717 | orchestrator | PING 192.168.112.190 (192.168.112.190) 56(84) bytes of data.
2026-03-18 03:56:41.819825 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=1 ttl=63 time=8.74 ms
2026-03-18 03:56:42.815622 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=2 ttl=63 time=2.37 ms
2026-03-18 03:56:43.817608 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=3 ttl=63 time=2.02 ms
2026-03-18 03:56:43.817952 | orchestrator | --- 192.168.112.190 ping statistics ---
2026-03-18 03:56:43.817974 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-03-18 03:56:43.817986 | orchestrator | rtt min/avg/max/mdev = 2.018/4.376/8.743/3.091 ms
2026-03-18 03:56:43.818010 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-18 03:56:43.818081 | orchestrator | + ping -c3 192.168.112.154
2026-03-18 03:56:43.829696 | orchestrator | PING 192.168.112.154 (192.168.112.154) 56(84) bytes of data.
2026-03-18 03:56:43.829775 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=1 ttl=63 time=7.33 ms
2026-03-18 03:56:44.826674 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=2 ttl=63 time=2.36 ms
2026-03-18 03:56:45.828438 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=3 ttl=63 time=1.98 ms
2026-03-18 03:56:45.828567 | orchestrator | --- 192.168.112.154 ping statistics ---
2026-03-18 03:56:45.828580 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-18 03:56:45.828592 | orchestrator | rtt min/avg/max/mdev = 1.978/3.892/7.334/2.438 ms
2026-03-18 03:56:45.828603 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-18 03:56:45.828615 | orchestrator | + ping -c3 192.168.112.114
2026-03-18 03:56:45.842220 | orchestrator | PING 192.168.112.114 (192.168.112.114) 56(84) bytes of data.
2026-03-18 03:56:45.842293 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=1 ttl=63 time=8.64 ms
2026-03-18 03:56:46.837885 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=2 ttl=63 time=2.25 ms
2026-03-18 03:56:47.839139 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=3 ttl=63 time=2.18 ms
2026-03-18 03:56:47.839261 | orchestrator | --- 192.168.112.114 ping statistics ---
2026-03-18 03:56:47.839274 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-18 03:56:47.839286 | orchestrator | rtt min/avg/max/mdev = 2.177/4.354/8.640/3.030 ms
2026-03-18 03:56:47.840279 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-18 03:56:48.083497 | orchestrator | ok: Runtime: 0:10:03.427181
2026-03-18 03:56:48.131989 | TASK [Run tempest]
2026-03-18 03:56:48.666149 | orchestrator | skipping: Conditional result was False
2026-03-18 03:56:48.685540 | TASK [Check prometheus alert status]
2026-03-18 03:56:49.225866 | orchestrator | skipping: Conditional result was False
2026-03-18 03:56:49.239311 | PLAY [Upgrade testbed]
2026-03-18 03:56:49.250756 | TASK [Print next ceph version]
2026-03-18 03:56:49.330159 | orchestrator | ok
2026-03-18 03:56:49.340600 | TASK [Print next openstack version]
2026-03-18 03:56:49.412264 | orchestrator | ok
2026-03-18 03:56:49.424487 | TASK [Print next manager version]
2026-03-18 03:56:49.505156 | orchestrator | ok
2026-03-18 03:56:49.517933 | TASK [Set cloud fact (Zuul deployment)]
2026-03-18 03:56:49.580703 | orchestrator | ok
2026-03-18 03:56:49.595784 | TASK [Set cloud fact (local deployment)]
2026-03-18 03:56:49.632286 | orchestrator | skipping: Conditional result was False
2026-03-18 03:56:49.645305 | TASK [Fetch manager address]
2026-03-18 03:56:49.924575 | orchestrator | ok
2026-03-18 03:56:49.934909 | TASK [Set manager_host address]
2026-03-18 03:56:50.016202 | orchestrator | ok
2026-03-18 03:56:50.027931 | TASK [Run upgrade]
2026-03-18 03:56:50.707878 | orchestrator | + set -e
2026-03-18 03:56:50.708096 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1
2026-03-18 03:56:50.708121 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1
2026-03-18 03:56:50.708141 | orchestrator | + CEPH_VERSION=reef
2026-03-18 03:56:50.708153 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-03-18 03:56:50.708164 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-03-18 03:56:50.708186 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release'
2026-03-18 03:56:50.717219 | orchestrator | + set -e
2026-03-18 03:56:50.717294 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-18 03:56:50.717309 | orchestrator | ++ export INTERACTIVE=false
2026-03-18 03:56:50.717320 | orchestrator | ++ INTERACTIVE=false
2026-03-18 03:56:50.717327 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-18 03:56:50.717339 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-18 03:56:50.719345 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible
2026-03-18 03:56:50.763172 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0
2026-03-18 03:56:50.763992 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-03-18 03:56:50.796640 | orchestrator | # UPGRADE MANAGER
2026-03-18 03:56:50.796664 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2
2026-03-18 03:56:50.796671 | orchestrator | + echo
2026-03-18 03:56:50.796678 | orchestrator | + echo '# UPGRADE MANAGER'
2026-03-18 03:56:50.796686 | orchestrator | + echo
2026-03-18 03:56:50.796693 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1
2026-03-18 03:56:50.796701 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1
2026-03-18 03:56:50.796707 | orchestrator | + CEPH_VERSION=reef
2026-03-18 03:56:50.796714 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-03-18 03:56:50.796721 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-03-18 03:56:50.796744 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1
2026-03-18 03:56:50.804087 | orchestrator | + set -e
2026-03-18 03:56:50.804209 | orchestrator | + VERSION=10.0.0-rc.1
2026-03-18 03:56:50.804227 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml
2026-03-18 03:56:50.814233 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]]
2026-03-18 03:56:50.814308 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-18 03:56:50.817523 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-18 03:56:50.820404 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-03-18 03:56:50.827922 | orchestrator | + set -e
2026-03-18 03:56:50.828008 | orchestrator | /opt/configuration ~
2026-03-18 03:56:50.828024 | orchestrator | + pushd /opt/configuration
2026-03-18 03:56:50.828037 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-18 03:56:50.828052 | orchestrator | + source /opt/venv/bin/activate
2026-03-18 03:56:50.831976 | orchestrator | ++ deactivate nondestructive
2026-03-18 03:56:50.832049 | orchestrator | ++ '[' -n '' ']'
2026-03-18 03:56:50.832062 | orchestrator | ++ '[' -n '' ']'
2026-03-18 03:56:50.832075 | orchestrator | ++ hash -r
2026-03-18 03:56:50.832087 | orchestrator | ++ '[' -n '' ']'
2026-03-18 03:56:50.832099 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-18 03:56:50.832110 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-18 03:56:50.832122 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-18 03:56:50.832137 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-18 03:56:50.832149 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-18 03:56:50.832160 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-18 03:56:50.832172 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-18 03:56:50.832185 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-18 03:56:50.832198 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-18 03:56:50.832209 | orchestrator | ++ export PATH
2026-03-18 03:56:50.832221 | orchestrator | ++ '[' -n '' ']'
2026-03-18 03:56:50.832232 | orchestrator | ++ '[' -z '' ']'
2026-03-18 03:56:50.832243 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-18 03:56:50.832254 | orchestrator | ++ PS1='(venv) '
2026-03-18 03:56:50.832265 | orchestrator | ++ export PS1
2026-03-18 03:56:50.832276 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-18 03:56:50.832288 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-18 03:56:50.832300 | orchestrator | ++ hash -r
2026-03-18 03:56:50.832316 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-03-18 03:56:52.124802 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-03-18 03:56:52.126663 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-03-18 03:56:52.128547 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-03-18 03:56:52.130669 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-03-18 03:56:52.132311 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-03-18 03:56:52.147403 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-03-18 03:56:52.149145 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-03-18 03:56:52.150951 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-03-18 03:56:52.153356 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-03-18 03:56:52.196657 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6)
2026-03-18 03:56:52.199307 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-03-18 03:56:52.202071 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-03-18 03:56:52.204249 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-03-18 03:56:52.209885 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-03-18 03:56:52.450122 | orchestrator | ++ which gilt
2026-03-18 03:56:52.453124 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-03-18 03:56:52.453165 | orchestrator | + /opt/venv/bin/gilt overlay
2026-03-18 03:56:52.700165 | orchestrator | osism.cfg-generics:
2026-03-18 03:56:52.807839 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-03-18 03:56:52.808519 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-03-18 03:56:52.809457 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-03-18 03:56:52.809531 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-03-18 03:56:53.722115 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-03-18 03:56:53.737380 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-03-18 03:56:54.092363 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-03-18 03:56:54.143651 | orchestrator | ~
2026-03-18 03:56:54.143825 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-18 03:56:54.143842 | orchestrator | + deactivate
2026-03-18 03:56:54.143854 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-18 03:56:54.143866 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-18 03:56:54.143875 | orchestrator | + export PATH
2026-03-18 03:56:54.143885 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-18 03:56:54.143894 | orchestrator | + '[' -n '' ']'
2026-03-18 03:56:54.143903 | orchestrator | + hash -r
2026-03-18 03:56:54.143912 | orchestrator | + '[' -n '' ']'
2026-03-18 03:56:54.143922 | orchestrator | + unset VIRTUAL_ENV
2026-03-18 03:56:54.143932 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-18 03:56:54.143941 |
orchestrator | + '[' '!' '' = nondestructive ']' 2026-03-18 03:56:54.143950 | orchestrator | + unset -f deactivate 2026-03-18 03:56:54.143960 | orchestrator | + popd 2026-03-18 03:56:54.145296 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-03-18 03:56:54.145345 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-03-18 03:56:54.149252 | orchestrator | + set -e 2026-03-18 03:56:54.149292 | orchestrator | + NAMESPACE=kolla/release 2026-03-18 03:56:54.149304 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-18 03:56:54.157582 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-18 03:56:54.162411 | orchestrator | /opt/configuration ~ 2026-03-18 03:56:54.162462 | orchestrator | + set -e 2026-03-18 03:56:54.162468 | orchestrator | + pushd /opt/configuration 2026-03-18 03:56:54.162472 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-18 03:56:54.162477 | orchestrator | + source /opt/venv/bin/activate 2026-03-18 03:56:54.162481 | orchestrator | ++ deactivate nondestructive 2026-03-18 03:56:54.162485 | orchestrator | ++ '[' -n '' ']' 2026-03-18 03:56:54.162489 | orchestrator | ++ '[' -n '' ']' 2026-03-18 03:56:54.162493 | orchestrator | ++ hash -r 2026-03-18 03:56:54.162496 | orchestrator | ++ '[' -n '' ']' 2026-03-18 03:56:54.162500 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-18 03:56:54.162504 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-18 03:56:54.162508 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-18 03:56:54.162512 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-18 03:56:54.162516 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-18 03:56:54.162571 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-18 03:56:54.162583 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-18 03:56:54.162590 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-18 03:56:54.162614 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-18 03:56:54.162622 | orchestrator | ++ export PATH 2026-03-18 03:56:54.162628 | orchestrator | ++ '[' -n '' ']' 2026-03-18 03:56:54.162739 | orchestrator | ++ '[' -z '' ']' 2026-03-18 03:56:54.162747 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-18 03:56:54.162751 | orchestrator | ++ PS1='(venv) ' 2026-03-18 03:56:54.162754 | orchestrator | ++ export PS1 2026-03-18 03:56:54.162759 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-18 03:56:54.162762 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-18 03:56:54.162766 | orchestrator | ++ hash -r 2026-03-18 03:56:54.163008 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-18 03:56:54.720476 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-18 03:56:54.722272 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-03-18 03:56:54.723918 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-18 03:56:54.725745 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-18 03:56:54.727163 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-18 03:56:54.747625 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-18 03:56:54.750395 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-18 03:56:54.753322 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-18 03:56:54.756446 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-18 03:56:54.795045 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-18 03:56:54.797003 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-18 03:56:54.799367 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-18 03:56:54.801273 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-18 03:56:54.807164 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-18 03:56:55.044213 | orchestrator | ++ which gilt 2026-03-18 03:56:55.045029 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-18 03:56:55.045065 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-18 03:56:55.230945 | orchestrator | osism.cfg-generics: 2026-03-18 03:56:55.279144 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-18 03:56:55.279315 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-18 03:56:55.279418 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-18 03:56:55.279439 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-18 03:56:55.907886 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-18 03:56:55.915772 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-18 03:56:56.456488 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-18 03:56:56.518525 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-18 03:56:56.518611 | orchestrator | + deactivate 2026-03-18 03:56:56.518649 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-18 03:56:56.518656 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-18 03:56:56.518661 | orchestrator | + export PATH 2026-03-18 03:56:56.518665 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-18 03:56:56.518670 | orchestrator | + '[' -n '' ']' 2026-03-18 03:56:56.518674 | orchestrator | + hash -r 2026-03-18 03:56:56.518679 | orchestrator | + '[' -n '' ']' 2026-03-18 03:56:56.518683 | orchestrator | + unset VIRTUAL_ENV 2026-03-18 03:56:56.518688 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-18 03:56:56.518693 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-18 03:56:56.518697 | orchestrator | + unset -f deactivate 2026-03-18 03:56:56.518753 | orchestrator | ~ 2026-03-18 03:56:56.518760 | orchestrator | + popd 2026-03-18 03:56:56.521024 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-03-18 03:56:56.589581 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-18 03:56:56.591239 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-03-18 03:56:56.685597 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-18 03:56:56.685695 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-03-18 03:56:56.691878 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-03-18 03:56:56.699483 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-03-18 03:56:56.760464 | orchestrator | ++ '[' -1 -le 0 ']' 2026-03-18 03:56:56.760531 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-03-18 03:56:56.867011 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-03-18 03:56:56.867075 | orchestrator | ++ echo true 2026-03-18 03:56:56.867423 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-03-18 03:56:56.869837 | orchestrator | +++ semver 2024.2 2024.2 2026-03-18 03:56:56.957511 | orchestrator | ++ '[' 0 -le 0 ']' 2026-03-18 03:56:56.957945 | orchestrator | +++ semver 2024.2 2025.1 2026-03-18 03:56:57.016609 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-03-18 03:56:57.016713 | orchestrator | ++ echo false 2026-03-18 03:56:57.017112 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-03-18 03:56:57.017235 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-18 03:56:57.017251 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-03-18 03:56:57.017342 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-03-18 03:56:57.017358 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 
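The `semver`/`[[ ... -ge 0 ]]` sequence above gates upgrade steps on whether version boundaries are crossed. The job's `semver` helper (which prints -1/0/1 for a three-way comparison) is not shown in the log, so this sketch approximates it with GNU `sort -V`; note the comparison is against the `10.0.0-0` sentinel rather than `10.0.0`, so that pre-releases like `10.0.0-rc.1` still count as crossing the boundary:

```shell
#!/usr/bin/env sh
# Sketch of the upgrade-gating logic traced above. The real `semver` helper
# is assumed to print -1/0/1; sort -V (GNU coreutils) stands in for it here.
semver() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]; then
        echo -1
    else
        echo 1
    fi
}

MANAGER_VERSION="10.0.0-rc.1"
OPENSTACK_VERSION="2024.2"

# Does this upgrade cross the manager 10.0.0 boundary?
MANAGER_UPGRADE_CROSSES_10=false
if [ "$(semver "$MANAGER_VERSION" 10.0.0-0)" -ge 0 ]; then
    MANAGER_UPGRADE_CROSSES_10=true
fi

# Does it cross the OpenStack 2025.1 boundary?
OPENSTACK_UPGRADE_CROSSES_2025=false
if [ "$(semver "$OPENSTACK_VERSION" 2025.1)" -ge 0 ]; then
    OPENSTACK_UPGRADE_CROSSES_2025=true
fi

echo "$MANAGER_UPGRADE_CROSSES_10 $OPENSTACK_UPGRADE_CROSSES_2025"
```

With the versions from this run, the flags come out `true` and `false`, matching `MANAGER_UPGRADE_CROSSES_10=true` and `OPENSTACK_UPGRADE_CROSSES_2025=false` in the trace; the `true` flag is what triggers the RabbitMQ vhost rewrites and the `RABBITMQ3TO4=true` export that follow.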
2026-03-18 03:56:57.023436 | orchestrator | + echo 'export RABBITMQ3TO4=true' 2026-03-18 03:56:57.023759 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-03-18 03:56:57.044521 | orchestrator | export RABBITMQ3TO4=true 2026-03-18 03:56:57.047388 | orchestrator | + osism update manager 2026-03-18 03:57:02.947884 | orchestrator | Collecting uv 2026-03-18 03:57:03.048592 | orchestrator | Downloading uv-0.10.11-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-03-18 03:57:03.068430 | orchestrator | Downloading uv-0.10.11-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23.6 MB) 2026-03-18 03:57:03.962284 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23.6/23.6 MB 33.8 MB/s eta 0:00:00 2026-03-18 03:57:04.035645 | orchestrator | Installing collected packages: uv 2026-03-18 03:57:04.499391 | orchestrator | Successfully installed uv-0.10.11 2026-03-18 03:57:05.092004 | orchestrator | Resolved 11 packages in 291ms 2026-03-18 03:57:05.130642 | orchestrator | Downloading cryptography (4.3MiB) 2026-03-18 03:57:05.131152 | orchestrator | Downloading netaddr (2.2MiB) 2026-03-18 03:57:05.132310 | orchestrator | Downloading ansible (54.5MiB) 2026-03-18 03:57:05.132804 | orchestrator | Downloading ansible-core (2.1MiB) 2026-03-18 03:57:05.471093 | orchestrator | Downloaded netaddr 2026-03-18 03:57:05.600336 | orchestrator | Downloaded cryptography 2026-03-18 03:57:05.642915 | orchestrator | Downloaded ansible-core 2026-03-18 03:57:12.425106 | orchestrator | Downloaded ansible 2026-03-18 03:57:12.425217 | orchestrator | Prepared 11 packages in 7.33s 2026-03-18 03:57:13.004507 | orchestrator | Installed 11 packages in 578ms 2026-03-18 03:57:13.005435 | orchestrator | + ansible==11.11.0 2026-03-18 03:57:13.005474 | orchestrator | + ansible-core==2.18.14 2026-03-18 03:57:13.005487 | orchestrator | + cffi==2.0.0 2026-03-18 03:57:13.005499 | orchestrator | + cryptography==46.0.5 2026-03-18 03:57:13.005510 | orchestrator | + 
jinja2==3.1.6 2026-03-18 03:57:13.005521 | orchestrator | + markupsafe==3.0.3 2026-03-18 03:57:13.005532 | orchestrator | + netaddr==1.3.0 2026-03-18 03:57:13.005543 | orchestrator | + packaging==26.0 2026-03-18 03:57:13.005554 | orchestrator | + pycparser==3.0 2026-03-18 03:57:13.005565 | orchestrator | + pyyaml==6.0.3 2026-03-18 03:57:13.005576 | orchestrator | + resolvelib==1.0.1 2026-03-18 03:57:14.067227 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-202772b2efad01/tmp7aty4d5b/ansible-collection-servicesonrvq29s'... 2026-03-18 03:57:15.672837 | orchestrator | Your branch is up to date with 'origin/main'. 2026-03-18 03:57:15.672935 | orchestrator | Already on 'main' 2026-03-18 03:57:16.140965 | orchestrator | Starting galaxy collection install process 2026-03-18 03:57:16.141064 | orchestrator | Process install dependency map 2026-03-18 03:57:16.141079 | orchestrator | Starting collection install process 2026-03-18 03:57:16.141092 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-03-18 03:57:16.141105 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-03-18 03:57:16.141117 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-18 03:57:16.622895 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-202882crzrty8y/tmpsyt63ohs/ansible-playbooks-manager48twy6eu'... 2026-03-18 03:57:17.203194 | orchestrator | Your branch is up to date with 'origin/main'. 
2026-03-18 03:57:17.203298 | orchestrator | Already on 'main' 2026-03-18 03:57:17.561977 | orchestrator | Starting galaxy collection install process 2026-03-18 03:57:17.562146 | orchestrator | Process install dependency map 2026-03-18 03:57:17.562163 | orchestrator | Starting collection install process 2026-03-18 03:57:17.562176 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-03-18 03:57:17.562189 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-03-18 03:57:17.562201 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-03-18 03:57:18.212096 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-03-18 03:57:18.212974 | orchestrator | -vvvv to see details 2026-03-18 03:57:18.612566 | orchestrator | 2026-03-18 03:57:18.612669 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-03-18 03:57:18.612685 | orchestrator | 2026-03-18 03:57:18.612698 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-18 03:57:22.549152 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:22.549245 | orchestrator | 2026-03-18 03:57:22.549258 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-18 03:57:22.630081 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-18 03:57:22.630175 | orchestrator | 2026-03-18 03:57:22.630211 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-18 03:57:24.491088 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:24.491204 | orchestrator | 2026-03-18 03:57:24.491220 | orchestrator | TASK 
[osism.services.manager : Gather variables for each operating system] ***** 2026-03-18 03:57:24.553124 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:24.553224 | orchestrator | 2026-03-18 03:57:24.553240 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-18 03:57:24.620365 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-18 03:57:24.620475 | orchestrator | 2026-03-18 03:57:24.620491 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-18 03:57:29.160264 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-03-18 03:57:29.160383 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-03-18 03:57:29.160399 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-18 03:57:29.160423 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-03-18 03:57:29.160435 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-18 03:57:29.160445 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-18 03:57:29.160456 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-18 03:57:29.160468 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-03-18 03:57:29.160479 | orchestrator | 2026-03-18 03:57:29.160491 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-18 03:57:30.192290 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:30.192400 | orchestrator | 2026-03-18 03:57:30.192416 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-18 03:57:31.201234 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:31.201365 | orchestrator | 2026-03-18 03:57:31.201391 | orchestrator | TASK [osism.services.manager : Include ara 
config tasks] *********************** 2026-03-18 03:57:31.310255 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-18 03:57:31.310348 | orchestrator | 2026-03-18 03:57:31.310362 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-18 03:57:33.268151 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-03-18 03:57:33.268264 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-03-18 03:57:33.268282 | orchestrator | 2026-03-18 03:57:33.268298 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-18 03:57:34.268154 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:34.268256 | orchestrator | 2026-03-18 03:57:34.268273 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-18 03:57:34.340478 | orchestrator | skipping: [testbed-manager] 2026-03-18 03:57:34.340592 | orchestrator | 2026-03-18 03:57:34.340618 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-18 03:57:34.435278 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-18 03:57:34.435376 | orchestrator | 2026-03-18 03:57:34.435391 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-18 03:57:35.460631 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:35.461869 | orchestrator | 2026-03-18 03:57:35.461940 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-18 03:57:35.547256 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-18 03:57:35.547350 | 
orchestrator | 2026-03-18 03:57:35.547365 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-18 03:57:37.513765 | orchestrator | ok: [testbed-manager] => (item=None) 2026-03-18 03:57:37.513872 | orchestrator | ok: [testbed-manager] => (item=None) 2026-03-18 03:57:37.513889 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:37.513903 | orchestrator | 2026-03-18 03:57:37.513915 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-18 03:57:38.512520 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:38.512615 | orchestrator | 2026-03-18 03:57:38.512630 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-18 03:57:38.581099 | orchestrator | skipping: [testbed-manager] 2026-03-18 03:57:38.581194 | orchestrator | 2026-03-18 03:57:38.581209 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-18 03:57:38.700093 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-18 03:57:38.700187 | orchestrator | 2026-03-18 03:57:38.700201 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-18 03:57:39.421850 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:39.421958 | orchestrator | 2026-03-18 03:57:39.421976 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-18 03:57:39.995759 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:39.995867 | orchestrator | 2026-03-18 03:57:39.995883 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-18 03:57:41.926336 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-03-18 03:57:41.926451 | orchestrator | ok: [testbed-manager] => 
(item=openstack) 2026-03-18 03:57:41.926466 | orchestrator | 2026-03-18 03:57:41.926479 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-18 03:57:43.187854 | orchestrator | changed: [testbed-manager] 2026-03-18 03:57:43.187971 | orchestrator | 2026-03-18 03:57:43.187993 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-18 03:57:43.772643 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:43.772769 | orchestrator | 2026-03-18 03:57:43.772782 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-18 03:57:44.321163 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:44.321264 | orchestrator | 2026-03-18 03:57:44.321303 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-18 03:57:44.375636 | orchestrator | skipping: [testbed-manager] 2026-03-18 03:57:44.375758 | orchestrator | 2026-03-18 03:57:44.375775 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-18 03:57:44.459408 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-18 03:57:44.459513 | orchestrator | 2026-03-18 03:57:44.459538 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-18 03:57:44.514045 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:44.514101 | orchestrator | 2026-03-18 03:57:44.514108 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-18 03:57:47.088074 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-03-18 03:57:47.088826 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-03-18 03:57:47.088858 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 
2026-03-18 03:57:47.088873 | orchestrator | 2026-03-18 03:57:47.088887 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-18 03:57:47.963104 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:47.963209 | orchestrator | 2026-03-18 03:57:47.963235 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-18 03:57:48.874998 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:48.875061 | orchestrator | 2026-03-18 03:57:48.875070 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-18 03:57:49.803484 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:49.803581 | orchestrator | 2026-03-18 03:57:49.803598 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-18 03:57:49.865372 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-18 03:57:49.865453 | orchestrator | 2026-03-18 03:57:49.865468 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-18 03:57:49.910933 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:49.911012 | orchestrator | 2026-03-18 03:57:49.911027 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-18 03:57:50.777650 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-03-18 03:57:50.777797 | orchestrator | 2026-03-18 03:57:50.777816 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-18 03:57:50.857034 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-18 03:57:50.857107 | orchestrator | 2026-03-18 03:57:50.857120 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-18 03:57:51.747284 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:51.747373 | orchestrator | 2026-03-18 03:57:51.747390 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-18 03:57:52.721366 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:52.721483 | orchestrator | 2026-03-18 03:57:52.721503 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-18 03:57:52.801037 | orchestrator | skipping: [testbed-manager] 2026-03-18 03:57:52.801118 | orchestrator | 2026-03-18 03:57:52.801133 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-18 03:57:52.865779 | orchestrator | ok: [testbed-manager] 2026-03-18 03:57:52.865877 | orchestrator | 2026-03-18 03:57:52.865899 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-18 03:57:54.096471 | orchestrator | changed: [testbed-manager] 2026-03-18 03:57:54.096578 | orchestrator | 2026-03-18 03:57:54.096607 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-18 03:59:00.080144 | orchestrator | changed: [testbed-manager] 2026-03-18 03:59:00.080259 | orchestrator | 2026-03-18 03:59:00.080276 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-18 03:59:01.406324 | orchestrator | ok: [testbed-manager] 2026-03-18 03:59:01.406467 | orchestrator | 2026-03-18 03:59:01.406484 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-18 03:59:01.481613 | orchestrator | skipping: [testbed-manager] 2026-03-18 03:59:01.481805 | orchestrator | 2026-03-18 03:59:01.481825 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-18 
03:59:02.390416 | orchestrator | ok: [testbed-manager] 2026-03-18 03:59:02.390511 | orchestrator | 2026-03-18 03:59:02.390525 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-18 03:59:02.470234 | orchestrator | skipping: [testbed-manager] 2026-03-18 03:59:02.470317 | orchestrator | 2026-03-18 03:59:02.470327 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-18 03:59:02.470335 | orchestrator | 2026-03-18 03:59:02.470342 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-18 03:59:17.244763 | orchestrator | changed: [testbed-manager] 2026-03-18 03:59:17.244881 | orchestrator | 2026-03-18 03:59:17.244898 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-18 04:00:17.308799 | orchestrator | Pausing for 60 seconds 2026-03-18 04:00:17.308902 | orchestrator | changed: [testbed-manager] 2026-03-18 04:00:17.308918 | orchestrator | 2026-03-18 04:00:17.308930 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-03-18 04:00:17.363989 | orchestrator | ok: [testbed-manager] 2026-03-18 04:00:17.364099 | orchestrator | 2026-03-18 04:00:17.364123 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-18 04:00:20.868899 | orchestrator | changed: [testbed-manager] 2026-03-18 04:00:20.869003 | orchestrator | 2026-03-18 04:00:20.869021 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-18 04:01:23.727095 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-18 04:01:23.727173 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
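The "Wait for an healthy manager service" handler above is a bounded retry loop (50 retries, three of which were consumed before the service reported healthy). A generic sketch of that pattern — the function name, retry/delay parameters, and the self-counting probe are all illustrative; the real handler polls the container's health status through Ansible's retries/until mechanism:

```shell
#!/usr/bin/env sh
# Illustrative retry-until-healthy loop mirroring the handler traced above.
# wait_for runs the given probe command until it succeeds or retries run out.
wait_for() {
    retries="$1"; delay="$2"; shift 2
    i="$retries"
    while [ "$i" -gt 0 ]; do
        if "$@"; then
            return 0
        fi
        i=$((i - 1))
        echo "FAILED - RETRYING ($i retries left)"
        sleep "$delay"
    done
    return 1
}

# Hypothetical probe: succeeds on its third call, mimicking the few failed
# attempts seen in the log before the manager service became healthy.
n=0
probe() {
    n=$((n + 1))
    [ "$n" -ge 3 ]
}

wait_for 50 0 probe && echo "healthy"
```

Running this prints two "FAILED - RETRYING" lines and then "healthy", the same shape as the handler output above; a real probe would query the service's health endpoint or container health status instead.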
2026-03-18 04:01:23.727180 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-03-18 04:01:23.727186 | orchestrator | changed: [testbed-manager] 2026-03-18 04:01:23.727192 | orchestrator | 2026-03-18 04:01:23.727200 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-18 04:01:34.283422 | orchestrator | changed: [testbed-manager] 2026-03-18 04:01:34.283518 | orchestrator | 2026-03-18 04:01:34.283533 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-18 04:01:34.370233 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-18 04:01:34.370354 | orchestrator | 2026-03-18 04:01:34.370378 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-18 04:01:34.370397 | orchestrator | 2026-03-18 04:01:34.370417 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-18 04:01:34.432712 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:01:34.432811 | orchestrator | 2026-03-18 04:01:34.432834 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-18 04:01:34.508552 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-18 04:01:34.508669 | orchestrator | 2026-03-18 04:01:34.508695 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-18 04:01:35.514186 | orchestrator | changed: [testbed-manager] 2026-03-18 04:01:35.514278 | orchestrator | 2026-03-18 04:01:35.514295 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-18 04:01:39.136270 
| orchestrator | ok: [testbed-manager] 2026-03-18 04:01:39.136376 | orchestrator | 2026-03-18 04:01:39.136393 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-18 04:01:39.225407 | orchestrator | ok: [testbed-manager] => { 2026-03-18 04:01:39.225501 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-18 04:01:39.225517 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-18 04:01:39.225529 | orchestrator | "Checking running containers against expected versions...", 2026-03-18 04:01:39.225541 | orchestrator | "", 2026-03-18 04:01:39.225552 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-18 04:01:39.225563 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-03-18 04:01:39.225575 | orchestrator | " Enabled: true", 2026-03-18 04:01:39.225586 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-03-18 04:01:39.225597 | orchestrator | " Status: ✅ MATCH", 2026-03-18 04:01:39.225608 | orchestrator | "", 2026-03-18 04:01:39.225618 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-18 04:01:39.225713 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-03-18 04:01:39.225728 | orchestrator | " Enabled: true", 2026-03-18 04:01:39.225739 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-03-18 04:01:39.225750 | orchestrator | " Status: ✅ MATCH", 2026-03-18 04:01:39.225761 | orchestrator | "", 2026-03-18 04:01:39.225772 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-18 04:01:39.225783 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-03-18 04:01:39.225793 | orchestrator | " Enabled: true", 2026-03-18 04:01:39.225804 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-03-18 04:01:39.225815 | orchestrator | " Status: ✅ MATCH", 2026-03-18 04:01:39.225825 | orchestrator | "", 2026-03-18 04:01:39.225836 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-18 04:01:39.225847 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-03-18 04:01:39.225858 | orchestrator | " Enabled: true", 2026-03-18 04:01:39.225868 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-03-18 04:01:39.225879 | orchestrator | " Status: ✅ MATCH", 2026-03-18 04:01:39.225889 | orchestrator | "", 2026-03-18 04:01:39.225901 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-18 04:01:39.225912 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-03-18 04:01:39.225922 | orchestrator | " Enabled: true", 2026-03-18 04:01:39.225937 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-03-18 04:01:39.225949 | orchestrator | " Status: ✅ MATCH", 2026-03-18 04:01:39.225961 | orchestrator | "", 2026-03-18 04:01:39.225974 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-18 04:01:39.226007 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-18 04:01:39.226066 | orchestrator | " Enabled: true", 2026-03-18 04:01:39.226079 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-18 04:01:39.226089 | orchestrator | " Status: ✅ MATCH", 2026-03-18 04:01:39.226100 | orchestrator | "", 2026-03-18 04:01:39.226111 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-18 04:01:39.226122 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-18 04:01:39.226133 | orchestrator | " Enabled: true", 2026-03-18 04:01:39.226143 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-18 
04:01:39.226154 | orchestrator | " Status: ✅ MATCH", 2026-03-18 04:01:39.226165 | orchestrator | "", 2026-03-18 04:01:39.226176 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-18 04:01:39.226187 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-18 04:01:39.226197 | orchestrator | " Enabled: true", 2026-03-18 04:01:39.226218 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-18 04:01:39.226229 | orchestrator | " Status: ✅ MATCH", 2026-03-18 04:01:39.226240 | orchestrator | "", 2026-03-18 04:01:39.226251 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-18 04:01:39.226262 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-03-18 04:01:39.226273 | orchestrator | " Enabled: true", 2026-03-18 04:01:39.226287 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-03-18 04:01:39.226306 | orchestrator | " Status: ✅ MATCH", 2026-03-18 04:01:39.226324 | orchestrator | "", 2026-03-18 04:01:39.226346 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-18 04:01:39.226364 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-18 04:01:39.226381 | orchestrator | " Enabled: true", 2026-03-18 04:01:39.226398 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-18 04:01:39.226413 | orchestrator | " Status: ✅ MATCH", 2026-03-18 04:01:39.226429 | orchestrator | "", 2026-03-18 04:01:39.226446 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-18 04:01:39.226464 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-18 04:01:39.226482 | orchestrator | " Enabled: true", 2026-03-18 04:01:39.226498 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-18 04:01:39.226516 | orchestrator | " Status: ✅ MATCH", 2026-03-18 
04:01:39.226535 | orchestrator | "", 2026-03-18 04:01:39.226550 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-18 04:01:39.226561 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-18 04:01:39.226571 | orchestrator | " Enabled: true", 2026-03-18 04:01:39.226582 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-18 04:01:39.226593 | orchestrator | " Status: ✅ MATCH", 2026-03-18 04:01:39.226603 | orchestrator | "", 2026-03-18 04:01:39.226614 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-18 04:01:39.226625 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-18 04:01:39.226665 | orchestrator | " Enabled: true", 2026-03-18 04:01:39.226676 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-18 04:01:39.226686 | orchestrator | " Status: ✅ MATCH", 2026-03-18 04:01:39.226697 | orchestrator | "", 2026-03-18 04:01:39.226708 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-18 04:01:39.226719 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-18 04:01:39.226729 | orchestrator | " Enabled: true", 2026-03-18 04:01:39.226740 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-18 04:01:39.226774 | orchestrator | " Status: ✅ MATCH", 2026-03-18 04:01:39.226794 | orchestrator | "", 2026-03-18 04:01:39.226811 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-18 04:01:39.226828 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-18 04:01:39.226859 | orchestrator | " Enabled: true", 2026-03-18 04:01:39.226876 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-18 04:01:39.226895 | orchestrator | " Status: ✅ MATCH", 2026-03-18 04:01:39.226914 | orchestrator | "", 2026-03-18 04:01:39.226933 | orchestrator | "=== Summary 
===", 2026-03-18 04:01:39.226950 | orchestrator | "Errors (version mismatches): 0", 2026-03-18 04:01:39.226962 | orchestrator | "Warnings (expected containers not running): 0", 2026-03-18 04:01:39.226973 | orchestrator | "", 2026-03-18 04:01:39.226983 | orchestrator | "✅ All running containers match expected versions!" 2026-03-18 04:01:39.226994 | orchestrator | ] 2026-03-18 04:01:39.227005 | orchestrator | } 2026-03-18 04:01:39.227016 | orchestrator | 2026-03-18 04:01:39.227027 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-18 04:01:39.299058 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:01:39.299868 | orchestrator | 2026-03-18 04:01:39.299905 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 04:01:39.299922 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-03-18 04:01:39.299936 | orchestrator | 2026-03-18 04:01:51.951508 | orchestrator | 2026-03-18 04:01:51 | INFO  | Task c670420e-328e-4d5a-b421-d7ab5ffb346a (sync inventory) is running in background. Output coming soon. 
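The version check whose output is summarized above compares, per service, the expected image reference against what Docker reports as actually running. A minimal sketch of such a check, assuming the `{{.Config.Image}}` inspect field; the `check_version` function name and the `DOCKER` override (added so the sketch is self-contained and testable) are hypothetical, while the image names mirror the log:

```shell
# Hypothetical sketch of a per-service container version check.
# Compares the image a container was started from against an expected reference.
check_version() {
    local name="$1" expected="$2" running
    # Ask Docker which image the container was created from.
    running="$("${DOCKER:-docker}" inspect -f '{{.Config.Image}}' "$name" 2>/dev/null)"
    if [ "$running" = "$expected" ]; then
        echo "$name: MATCH ($running)"
    else
        echo "$name: MISMATCH (expected $expected, running ${running:-none})"
        return 1
    fi
}
```

Iterating this over the service table (inventory_reconciler, osism-ansible, kolla-ansible, …) and counting nonzero returns yields the "Errors (version mismatches)" summary seen in the output.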
2026-03-18 04:02:21.145089 | orchestrator | 2026-03-18 04:01:53 | INFO  | Starting group_vars file reorganization 2026-03-18 04:02:21.145206 | orchestrator | 2026-03-18 04:01:53 | INFO  | Moved 0 file(s) to their respective directories 2026-03-18 04:02:21.145223 | orchestrator | 2026-03-18 04:01:53 | INFO  | Group_vars file reorganization completed 2026-03-18 04:02:21.145258 | orchestrator | 2026-03-18 04:01:56 | INFO  | Starting variable preparation from inventory 2026-03-18 04:02:21.145271 | orchestrator | 2026-03-18 04:01:59 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-03-18 04:02:21.145283 | orchestrator | 2026-03-18 04:01:59 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-03-18 04:02:21.145294 | orchestrator | 2026-03-18 04:01:59 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-03-18 04:02:21.145305 | orchestrator | 2026-03-18 04:01:59 | INFO  | 3 file(s) written, 6 host(s) processed 2026-03-18 04:02:21.145323 | orchestrator | 2026-03-18 04:01:59 | INFO  | Variable preparation completed 2026-03-18 04:02:21.145343 | orchestrator | 2026-03-18 04:02:01 | INFO  | Starting inventory overwrite handling 2026-03-18 04:02:21.145362 | orchestrator | 2026-03-18 04:02:01 | INFO  | Handling group overwrites in 99-overwrite 2026-03-18 04:02:21.145384 | orchestrator | 2026-03-18 04:02:01 | INFO  | Removing group frr:children from 60-generic 2026-03-18 04:02:21.145406 | orchestrator | 2026-03-18 04:02:01 | INFO  | Removing group netbird:children from 50-infrastructure 2026-03-18 04:02:21.145428 | orchestrator | 2026-03-18 04:02:01 | INFO  | Removing group ceph-mds from 50-ceph 2026-03-18 04:02:21.145448 | orchestrator | 2026-03-18 04:02:01 | INFO  | Removing group ceph-rgw from 50-ceph 2026-03-18 04:02:21.145464 | orchestrator | 2026-03-18 04:02:01 | INFO  | Handling group overwrites in 20-roles 2026-03-18 04:02:21.145475 | orchestrator | 2026-03-18 04:02:01 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-03-18 04:02:21.145486 | orchestrator | 2026-03-18 04:02:01 | INFO  | Removed 5 group(s) in total 2026-03-18 04:02:21.145497 | orchestrator | 2026-03-18 04:02:01 | INFO  | Inventory overwrite handling completed 2026-03-18 04:02:21.145509 | orchestrator | 2026-03-18 04:02:02 | INFO  | Starting merge of inventory files 2026-03-18 04:02:21.145520 | orchestrator | 2026-03-18 04:02:02 | INFO  | Inventory files merged successfully 2026-03-18 04:02:21.145554 | orchestrator | 2026-03-18 04:02:07 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-03-18 04:02:21.145566 | orchestrator | 2026-03-18 04:02:19 | INFO  | Successfully wrote ClusterShell configuration 2026-03-18 04:02:21.455298 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-18 04:02:21.455393 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-18 04:02:21.455407 | orchestrator | + local max_attempts=60 2026-03-18 04:02:21.455420 | orchestrator | + local name=kolla-ansible 2026-03-18 04:02:21.455430 | orchestrator | + local attempt_num=1 2026-03-18 04:02:21.455441 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-18 04:02:21.487827 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-18 04:02:21.487910 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-18 04:02:21.487923 | orchestrator | + local max_attempts=60 2026-03-18 04:02:21.487935 | orchestrator | + local name=osism-ansible 2026-03-18 04:02:21.488200 | orchestrator | + local attempt_num=1 2026-03-18 04:02:21.488508 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-18 04:02:21.519968 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-18 04:02:21.520058 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-03-18 04:02:21.721425 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-18 04:02:21.721523 | 
orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-03-18 04:02:21.721538 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-03-18 04:02:21.721549 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-18 04:02:21.721565 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-03-18 04:02:21.721575 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-03-18 04:02:21.721585 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-03-18 04:02:21.721597 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-03-18 04:02:21.721606 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 16 seconds ago 2026-03-18 04:02:21.721617 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-03-18 04:02:21.721671 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-03-18 04:02:21.721684 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 
minutes (healthy) 6379/tcp 2026-03-18 04:02:21.721694 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-03-18 04:02:21.721731 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-03-18 04:02:21.721741 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-03-18 04:02:21.721752 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-03-18 04:02:21.726655 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-03-18 04:02:21.726695 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-03-18 04:02:21.726702 | orchestrator | + osism apply facts 2026-03-18 04:02:33.980150 | orchestrator | 2026-03-18 04:02:33 | INFO  | Task 3dbe825d-dca4-4d80-bd69-d03148e7c3ea (facts) was prepared for execution. 2026-03-18 04:02:33.980290 | orchestrator | 2026-03-18 04:02:33 | INFO  | It takes a moment until task 3dbe825d-dca4-4d80-bd69-d03148e7c3ea (facts) has been started and output is visible here. 
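The `wait_for_container_healthy` helper traced above (`max_attempts=60`, `attempt_num=1`, a `docker inspect` health probe) can be reconstructed roughly as follows. The argument names and the inspect format string come from the xtrace; the retry/sleep logic and the `DOCKER` override (added so the sketch is testable without a Docker daemon) are assumptions:

```shell
# Hypothetical reconstruction of the wait_for_container_healthy helper from the trace.
DOCKER="${DOCKER:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [ "$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" = healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "Container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 1
    done
}
```

In the run above both probed containers (kolla-ansible, osism-ansible) were already healthy, so the loop body never executed and the `[[ healthy == healthy ]]` comparisons passed on the first attempt.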
2026-03-18 04:02:58.695947 | orchestrator | 2026-03-18 04:02:58.696059 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-18 04:02:58.696075 | orchestrator | 2026-03-18 04:02:58.696085 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-18 04:02:58.696096 | orchestrator | Wednesday 18 March 2026 04:02:40 +0000 (0:00:02.433) 0:00:02.433 ******* 2026-03-18 04:02:58.696106 | orchestrator | ok: [testbed-manager] 2026-03-18 04:02:58.696117 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:02:58.696127 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:02:58.696136 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:02:58.696146 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:02:58.696155 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:02:58.696165 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:02:58.696174 | orchestrator | 2026-03-18 04:02:58.696184 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-18 04:02:58.696194 | orchestrator | Wednesday 18 March 2026 04:02:44 +0000 (0:00:03.819) 0:00:06.253 ******* 2026-03-18 04:02:58.696203 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:02:58.696215 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:02:58.696224 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:02:58.696234 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:02:58.696243 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:02:58.696252 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:02:58.696262 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:02:58.696272 | orchestrator | 2026-03-18 04:02:58.696282 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-18 04:02:58.696291 | orchestrator | 2026-03-18 04:02:58.696301 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-18 04:02:58.696310 | orchestrator | Wednesday 18 March 2026 04:02:47 +0000 (0:00:02.731) 0:00:08.984 ******* 2026-03-18 04:02:58.696320 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:02:58.696349 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:02:58.696361 | orchestrator | ok: [testbed-manager] 2026-03-18 04:02:58.696412 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:02:58.696428 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:02:58.696438 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:02:58.696448 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:02:58.696457 | orchestrator | 2026-03-18 04:02:58.696467 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-18 04:02:58.696477 | orchestrator | 2026-03-18 04:02:58.696486 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-18 04:02:58.696498 | orchestrator | Wednesday 18 March 2026 04:02:55 +0000 (0:00:08.186) 0:00:17.170 ******* 2026-03-18 04:02:58.696509 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:02:58.696543 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:02:58.696555 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:02:58.696566 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:02:58.696577 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:02:58.696588 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:02:58.696599 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:02:58.696610 | orchestrator | 2026-03-18 04:02:58.696645 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 04:02:58.696658 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 04:02:58.696671 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-18 04:02:58.696682 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 04:02:58.696693 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 04:02:58.696704 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 04:02:58.696715 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 04:02:58.696725 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 04:02:58.696737 | orchestrator | 2026-03-18 04:02:58.696748 | orchestrator | 2026-03-18 04:02:58.696757 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 04:02:58.696767 | orchestrator | Wednesday 18 March 2026 04:02:58 +0000 (0:00:02.568) 0:00:19.739 ******* 2026-03-18 04:02:58.696777 | orchestrator | =============================================================================== 2026-03-18 04:02:58.696786 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.19s 2026-03-18 04:02:58.696796 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.82s 2026-03-18 04:02:58.696805 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.73s 2026-03-18 04:02:58.696815 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.57s 2026-03-18 04:02:59.023468 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-03-18 04:02:59.121832 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-18 04:02:59.122665 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-03-18 04:02:59.168939 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-03-18 04:02:59.169024 | 
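The `semver A B` helper invoked in the trace above prints a comparison result that the script then tests with `[[ ... -ge 0 ]]`, i.e. "is A at least B". A simplified hypothetical reconstruction, assuming numeric major.minor.patch components: it ranks any pre-release below the corresponding release but does not compare pre-release identifiers against each other (so for `10.0.0-rc.1` vs `10.0.0-0` it reports 0, where a full SemVer precedence comparison would report 1, as the real helper did above):

```shell
# Simplified sketch of a semver comparator: prints 1, 0, or -1 when the first
# version is greater than, equal to, or less than the second.
semver() {
    local a="${1%%-*}" b="${2%%-*}" pa pb
    [ "$1" = "$a" ] && pa=1 || pa=0   # 1 = release, 0 = has a pre-release tag
    [ "$2" = "$b" ] && pb=1 || pb=0
    local IFS=. pair x y
    set -- $a $b                      # split both cores: a1 a2 a3 b1 b2 b3
    for pair in "$1:$4" "$2:$5" "$3:$6"; do
        x="${pair%%:*}" y="${pair##*:}"
        [ "$x" -gt "$y" ] && { echo 1; return; }
        [ "$x" -lt "$y" ] && { echo -1; return; }
    done
    # Equal cores: a release outranks a pre-release of the same version.
    if [ "$pa" -gt "$pb" ]; then echo 1
    elif [ "$pa" -lt "$pb" ]; then echo -1
    else echo 0; fi
}
```

This explains the guards seen in the log: `semver 9.5.0 7.0.0` and `semver 10.0.0-rc.1 8.0.3` both yield 1, so the corresponding upgrade branches are taken.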
orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-03-18 04:02:59.176571 | orchestrator | + set -e 2026-03-18 04:02:59.176658 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-03-18 04:02:59.176673 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-18 04:02:59.184492 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-03-18 04:02:59.194416 | orchestrator | 2026-03-18 04:02:59.194476 | orchestrator | # UPGRADE SERVICES 2026-03-18 04:02:59.194489 | orchestrator | 2026-03-18 04:02:59.194500 | orchestrator | + set -e 2026-03-18 04:02:59.194511 | orchestrator | + echo 2026-03-18 04:02:59.194522 | orchestrator | + echo '# UPGRADE SERVICES' 2026-03-18 04:02:59.194533 | orchestrator | + echo 2026-03-18 04:02:59.194544 | orchestrator | + source /opt/manager-vars.sh 2026-03-18 04:02:59.195723 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-18 04:02:59.195746 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-18 04:02:59.195757 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-18 04:02:59.195768 | orchestrator | ++ CEPH_VERSION=reef 2026-03-18 04:02:59.195779 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-18 04:02:59.195792 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-18 04:02:59.195803 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-18 04:02:59.195838 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-18 04:02:59.195850 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-18 04:02:59.195861 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-18 04:02:59.195872 | orchestrator | ++ export ARA=false 2026-03-18 04:02:59.195882 | orchestrator | ++ ARA=false 2026-03-18 04:02:59.195893 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-18 04:02:59.195903 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-18 04:02:59.195914 | orchestrator | ++ export TEMPEST=false 
2026-03-18 04:02:59.195933 | orchestrator | ++ TEMPEST=false 2026-03-18 04:02:59.195952 | orchestrator | ++ export IS_ZUUL=true 2026-03-18 04:02:59.195969 | orchestrator | ++ IS_ZUUL=true 2026-03-18 04:02:59.195987 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 04:02:59.196007 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 04:02:59.196026 | orchestrator | ++ export EXTERNAL_API=false 2026-03-18 04:02:59.196038 | orchestrator | ++ EXTERNAL_API=false 2026-03-18 04:02:59.196049 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-18 04:02:59.196059 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-18 04:02:59.196070 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-18 04:02:59.196080 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-18 04:02:59.196091 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-18 04:02:59.196101 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-18 04:02:59.196112 | orchestrator | ++ export RABBITMQ3TO4=true 2026-03-18 04:02:59.196122 | orchestrator | ++ RABBITMQ3TO4=true 2026-03-18 04:02:59.196133 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-03-18 04:02:59.196143 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-03-18 04:02:59.196154 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-03-18 04:02:59.205169 | orchestrator | + set -e 2026-03-18 04:02:59.205215 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-18 04:02:59.206783 | orchestrator | 2026-03-18 04:02:59.206808 | orchestrator | # PULL IMAGES 2026-03-18 04:02:59.206820 | orchestrator | 2026-03-18 04:02:59.206831 | orchestrator | ++ export INTERACTIVE=false 2026-03-18 04:02:59.206842 | orchestrator | ++ INTERACTIVE=false 2026-03-18 04:02:59.206853 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-18 04:02:59.206863 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-18 04:02:59.206874 | orchestrator | + source /opt/manager-vars.sh 2026-03-18 04:02:59.206884 | 
orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-18 04:02:59.206895 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-18 04:02:59.206906 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-18 04:02:59.206917 | orchestrator | ++ CEPH_VERSION=reef 2026-03-18 04:02:59.206928 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-18 04:02:59.206960 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-18 04:02:59.206972 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-18 04:02:59.206983 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-18 04:02:59.206994 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-18 04:02:59.207005 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-18 04:02:59.207016 | orchestrator | ++ export ARA=false 2026-03-18 04:02:59.207027 | orchestrator | ++ ARA=false 2026-03-18 04:02:59.207037 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-18 04:02:59.207048 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-18 04:02:59.207058 | orchestrator | ++ export TEMPEST=false 2026-03-18 04:02:59.207069 | orchestrator | ++ TEMPEST=false 2026-03-18 04:02:59.207079 | orchestrator | ++ export IS_ZUUL=true 2026-03-18 04:02:59.207090 | orchestrator | ++ IS_ZUUL=true 2026-03-18 04:02:59.207101 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 04:02:59.207111 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43 2026-03-18 04:02:59.207122 | orchestrator | ++ export EXTERNAL_API=false 2026-03-18 04:02:59.207134 | orchestrator | ++ EXTERNAL_API=false 2026-03-18 04:02:59.207145 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-18 04:02:59.207156 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-18 04:02:59.207166 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-18 04:02:59.207177 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-18 04:02:59.207188 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-18 04:02:59.207198 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-18 04:02:59.207209 | 
orchestrator | ++ export RABBITMQ3TO4=true 2026-03-18 04:02:59.207220 | orchestrator | ++ RABBITMQ3TO4=true 2026-03-18 04:02:59.207230 | orchestrator | + echo 2026-03-18 04:02:59.207241 | orchestrator | + echo '# PULL IMAGES' 2026-03-18 04:02:59.207252 | orchestrator | + echo 2026-03-18 04:02:59.207507 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-18 04:02:59.266199 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-18 04:02:59.266289 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-18 04:03:01.388559 | orchestrator | 2026-03-18 04:03:01 | INFO  | Trying to run play pull-images in environment custom 2026-03-18 04:03:11.582939 | orchestrator | 2026-03-18 04:03:11 | INFO  | Task 4cb216b8-3d39-4988-b30c-3b5d0d66c4ef (pull-images) was prepared for execution. 2026-03-18 04:03:11.583084 | orchestrator | 2026-03-18 04:03:11 | INFO  | Task 4cb216b8-3d39-4988-b30c-3b5d0d66c4ef is running in background. No more output. Check ARA for logs. 2026-03-18 04:03:11.919471 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh 2026-03-18 04:03:11.929891 | orchestrator | + set -e 2026-03-18 04:03:11.929971 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-18 04:03:11.929987 | orchestrator | ++ export INTERACTIVE=false 2026-03-18 04:03:11.929999 | orchestrator | ++ INTERACTIVE=false 2026-03-18 04:03:11.930010 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-18 04:03:11.930080 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-18 04:03:11.930093 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-18 04:03:11.932116 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-18 04:03:11.943179 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-03-18 04:03:11.943252 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-03-18 04:03:11.943962 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3 2026-03-18 04:03:11.991558 | 
orchestrator | + [[ 1 -ge 0 ]] 2026-03-18 04:03:11.991715 | orchestrator | + osism apply frr 2026-03-18 04:03:24.211433 | orchestrator | 2026-03-18 04:03:24 | INFO  | Task 130a8761-57f7-4351-8b9a-1f310d99d3c3 (frr) was prepared for execution. 2026-03-18 04:03:24.211528 | orchestrator | 2026-03-18 04:03:24 | INFO  | It takes a moment until task 130a8761-57f7-4351-8b9a-1f310d99d3c3 (frr) has been started and output is visible here. 2026-03-18 04:03:57.808960 | orchestrator | 2026-03-18 04:03:57.809102 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-18 04:03:57.809118 | orchestrator | 2026-03-18 04:03:57.809128 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-18 04:03:57.809137 | orchestrator | Wednesday 18 March 2026 04:03:31 +0000 (0:00:03.203) 0:00:03.203 ******* 2026-03-18 04:03:57.809147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-18 04:03:57.809157 | orchestrator | 2026-03-18 04:03:57.809166 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-18 04:03:57.809175 | orchestrator | Wednesday 18 March 2026 04:03:34 +0000 (0:00:02.733) 0:00:05.936 ******* 2026-03-18 04:03:57.809184 | orchestrator | ok: [testbed-manager] 2026-03-18 04:03:57.809194 | orchestrator | 2026-03-18 04:03:57.809203 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-18 04:03:57.809211 | orchestrator | Wednesday 18 March 2026 04:03:37 +0000 (0:00:02.439) 0:00:08.376 ******* 2026-03-18 04:03:57.809220 | orchestrator | ok: [testbed-manager] 2026-03-18 04:03:57.809229 | orchestrator | 2026-03-18 04:03:57.809237 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-18 04:03:57.809246 | orchestrator | Wednesday 
18 March 2026 04:03:39 +0000 (0:00:02.842) 0:00:11.219 ******* 2026-03-18 04:03:57.809255 | orchestrator | ok: [testbed-manager] 2026-03-18 04:03:57.809264 | orchestrator | 2026-03-18 04:03:57.809273 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-18 04:03:57.809282 | orchestrator | Wednesday 18 March 2026 04:03:41 +0000 (0:00:01.950) 0:00:13.169 ******* 2026-03-18 04:03:57.809290 | orchestrator | ok: [testbed-manager] 2026-03-18 04:03:57.809299 | orchestrator | 2026-03-18 04:03:57.809307 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-18 04:03:57.809316 | orchestrator | Wednesday 18 March 2026 04:03:43 +0000 (0:00:01.892) 0:00:15.061 ******* 2026-03-18 04:03:57.809325 | orchestrator | ok: [testbed-manager] 2026-03-18 04:03:57.809333 | orchestrator | 2026-03-18 04:03:57.809342 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-18 04:03:57.809351 | orchestrator | Wednesday 18 March 2026 04:03:46 +0000 (0:00:02.575) 0:00:17.636 ******* 2026-03-18 04:03:57.809360 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:03:57.809394 | orchestrator | 2026-03-18 04:03:57.809404 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-18 04:03:57.809412 | orchestrator | Wednesday 18 March 2026 04:03:47 +0000 (0:00:01.168) 0:00:18.805 ******* 2026-03-18 04:03:57.809421 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:03:57.809429 | orchestrator | 2026-03-18 04:03:57.809438 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-18 04:03:57.809446 | orchestrator | Wednesday 18 March 2026 04:03:48 +0000 (0:00:01.207) 0:00:20.012 ******* 2026-03-18 04:03:57.809455 | orchestrator | ok: [testbed-manager] 2026-03-18 04:03:57.809464 | orchestrator | 2026-03-18 04:03:57.809472 | orchestrator | 
TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-18 04:03:57.809481 | orchestrator | Wednesday 18 March 2026 04:03:50 +0000 (0:00:02.151) 0:00:22.164 ******* 2026-03-18 04:03:57.809490 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-18 04:03:57.809498 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-18 04:03:57.809509 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-18 04:03:57.809520 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-18 04:03:57.809531 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-18 04:03:57.809557 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-18 04:03:57.809568 | orchestrator | 2026-03-18 04:03:57.809578 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-18 04:03:57.809588 | orchestrator | Wednesday 18 March 2026 04:03:54 +0000 (0:00:03.800) 0:00:25.965 ******* 2026-03-18 04:03:57.809599 | orchestrator | ok: [testbed-manager] 2026-03-18 04:03:57.809609 | orchestrator | 2026-03-18 04:03:57.809619 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 04:03:57.809629 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-18 04:03:57.809639 | orchestrator | 2026-03-18 04:03:57.809648 | orchestrator | 2026-03-18 04:03:57.809658 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 04:03:57.809668 | orchestrator | Wednesday 18 March 2026 04:03:57 +0000 (0:00:02.635) 0:00:28.601 ******* 2026-03-18 
04:03:57.809679 | orchestrator | =============================================================================== 2026-03-18 04:03:57.809713 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.80s 2026-03-18 04:03:57.809723 | orchestrator | osism.services.frr : Install frr package -------------------------------- 2.84s 2026-03-18 04:03:57.809733 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 2.73s 2026-03-18 04:03:57.809743 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.64s 2026-03-18 04:03:57.809753 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.58s 2026-03-18 04:03:57.809763 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.44s 2026-03-18 04:03:57.809773 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 2.15s 2026-03-18 04:03:57.809783 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.95s 2026-03-18 04:03:57.809809 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.89s 2026-03-18 04:03:57.809820 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.21s 2026-03-18 04:03:57.809830 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.17s 2026-03-18 04:03:58.145132 | orchestrator | + osism apply kubernetes 2026-03-18 04:04:00.248086 | orchestrator | 2026-03-18 04:04:00 | INFO  | Task 399b738c-1620-4717-954f-cece67e91006 (kubernetes) was prepared for execution. 2026-03-18 04:04:00.248262 | orchestrator | 2026-03-18 04:04:00 | INFO  | It takes a moment until task 399b738c-1620-4717-954f-cece67e91006 (kubernetes) has been started and output is visible here. 
2026-03-18 04:04:45.197360 | orchestrator | 2026-03-18 04:04:45.197501 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-18 04:04:45.197518 | orchestrator | 2026-03-18 04:04:45.197530 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-18 04:04:45.197543 | orchestrator | Wednesday 18 March 2026 04:04:07 +0000 (0:00:02.197) 0:00:02.197 ******* 2026-03-18 04:04:45.197554 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:04:45.197566 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:04:45.197577 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:04:45.197589 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:04:45.197600 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:04:45.197611 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:04:45.197622 | orchestrator | 2026-03-18 04:04:45.197633 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-18 04:04:45.197644 | orchestrator | Wednesday 18 March 2026 04:04:11 +0000 (0:00:04.650) 0:00:06.848 ******* 2026-03-18 04:04:45.197655 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:04:45.197667 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:04:45.197678 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:04:45.197689 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:04:45.197700 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:04:45.197711 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:04:45.197721 | orchestrator | 2026-03-18 04:04:45.197733 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-18 04:04:45.197744 | orchestrator | Wednesday 18 March 2026 04:04:13 +0000 (0:00:02.052) 0:00:08.901 ******* 2026-03-18 04:04:45.197755 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:04:45.197766 | orchestrator | skipping: [testbed-node-4] 2026-03-18 
04:04:45.197777 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:04:45.197787 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:04:45.197798 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:04:45.197809 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:04:45.197820 | orchestrator | 2026-03-18 04:04:45.197832 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-18 04:04:45.197843 | orchestrator | Wednesday 18 March 2026 04:04:15 +0000 (0:00:02.229) 0:00:11.130 ******* 2026-03-18 04:04:45.197854 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:04:45.197865 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:04:45.197876 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:04:45.197889 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:04:45.197902 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:04:45.197963 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:04:45.197977 | orchestrator | 2026-03-18 04:04:45.197989 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-18 04:04:45.198002 | orchestrator | Wednesday 18 March 2026 04:04:19 +0000 (0:00:03.062) 0:00:14.193 ******* 2026-03-18 04:04:45.198095 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:04:45.198109 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:04:45.198122 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:04:45.198134 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:04:45.198165 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:04:45.198177 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:04:45.198188 | orchestrator | 2026-03-18 04:04:45.198199 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-18 04:04:45.198210 | orchestrator | Wednesday 18 March 2026 04:04:21 +0000 (0:00:02.588) 0:00:16.781 ******* 2026-03-18 04:04:45.198221 | orchestrator | ok: [testbed-node-3] 2026-03-18 
04:04:45.198232 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:04:45.198243 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:04:45.198254 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:04:45.198265 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:04:45.198297 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:04:45.198309 | orchestrator | 2026-03-18 04:04:45.198320 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-18 04:04:45.198439 | orchestrator | Wednesday 18 March 2026 04:04:23 +0000 (0:00:02.137) 0:00:18.919 ******* 2026-03-18 04:04:45.198455 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:04:45.198466 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:04:45.198478 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:04:45.198489 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:04:45.198500 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:04:45.198510 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:04:45.198521 | orchestrator | 2026-03-18 04:04:45.198532 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-18 04:04:45.198543 | orchestrator | Wednesday 18 March 2026 04:04:25 +0000 (0:00:02.020) 0:00:20.940 ******* 2026-03-18 04:04:45.198554 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:04:45.198565 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:04:45.198575 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:04:45.198586 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:04:45.198596 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:04:45.198607 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:04:45.198618 | orchestrator | 2026-03-18 04:04:45.198629 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-18 04:04:45.198639 | orchestrator | Wednesday 18 March 2026 04:04:27 +0000 
(0:00:01.702) 0:00:22.643 ******* 2026-03-18 04:04:45.198650 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-18 04:04:45.198661 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-18 04:04:45.198671 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:04:45.198693 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-18 04:04:45.198704 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-18 04:04:45.198715 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:04:45.198726 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-18 04:04:45.198737 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-18 04:04:45.198747 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:04:45.198758 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-18 04:04:45.198769 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-18 04:04:45.198780 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:04:45.198810 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-18 04:04:45.198821 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-18 04:04:45.198832 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:04:45.198843 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-18 04:04:45.198853 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-18 04:04:45.198864 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:04:45.198875 | orchestrator | 2026-03-18 04:04:45.198886 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to 
sudo secure_path] ********************* 2026-03-18 04:04:45.198897 | orchestrator | Wednesday 18 March 2026 04:04:29 +0000 (0:00:02.049) 0:00:24.693 ******* 2026-03-18 04:04:45.199070 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:04:45.199107 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:04:45.199119 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:04:45.199129 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:04:45.199140 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:04:45.199151 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:04:45.199161 | orchestrator | 2026-03-18 04:04:45.199186 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-18 04:04:45.199198 | orchestrator | Wednesday 18 March 2026 04:04:31 +0000 (0:00:02.356) 0:00:27.049 ******* 2026-03-18 04:04:45.199209 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:04:45.199219 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:04:45.199230 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:04:45.199241 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:04:45.199251 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:04:45.199261 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:04:45.199272 | orchestrator | 2026-03-18 04:04:45.199283 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-18 04:04:45.199294 | orchestrator | Wednesday 18 March 2026 04:04:33 +0000 (0:00:01.958) 0:00:29.008 ******* 2026-03-18 04:04:45.199304 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:04:45.199315 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:04:45.199325 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:04:45.199336 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:04:45.199346 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:04:45.199357 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:04:45.199367 | 
orchestrator | 2026-03-18 04:04:45.199378 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-18 04:04:45.199389 | orchestrator | Wednesday 18 March 2026 04:04:36 +0000 (0:00:02.793) 0:00:31.801 ******* 2026-03-18 04:04:45.199399 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:04:45.199410 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:04:45.199421 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:04:45.199431 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:04:45.199442 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:04:45.199452 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:04:45.199463 | orchestrator | 2026-03-18 04:04:45.199474 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-18 04:04:45.199484 | orchestrator | Wednesday 18 March 2026 04:04:38 +0000 (0:00:01.979) 0:00:33.781 ******* 2026-03-18 04:04:45.199493 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:04:45.199503 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:04:45.199512 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:04:45.199521 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:04:45.199531 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:04:45.199540 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:04:45.199549 | orchestrator | 2026-03-18 04:04:45.199559 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-18 04:04:45.199570 | orchestrator | Wednesday 18 March 2026 04:04:40 +0000 (0:00:02.159) 0:00:35.941 ******* 2026-03-18 04:04:45.199580 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:04:45.199589 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:04:45.199599 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:04:45.199609 | orchestrator | skipping: 
[testbed-node-0] 2026-03-18 04:04:45.199618 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:04:45.199628 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:04:45.199637 | orchestrator | 2026-03-18 04:04:45.199651 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-18 04:04:45.199661 | orchestrator | Wednesday 18 March 2026 04:04:42 +0000 (0:00:01.882) 0:00:37.823 ******* 2026-03-18 04:04:45.199671 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-18 04:04:45.199680 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-18 04:04:45.199690 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:04:45.199699 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-18 04:04:45.199709 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-18 04:04:45.199718 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:04:45.199728 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-18 04:04:45.199737 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-18 04:04:45.199752 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:04:45.199762 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-18 04:04:45.199771 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-18 04:04:45.199781 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:04:45.199790 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-18 04:04:45.199800 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-18 04:04:45.199809 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:04:45.199818 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-18 04:04:45.199828 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-18 04:04:45.199837 | orchestrator | skipping: [testbed-node-2] 2026-03-18 
04:04:45.199847 | orchestrator | 2026-03-18 04:04:45.199857 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-18 04:04:45.199866 | orchestrator | Wednesday 18 March 2026 04:04:44 +0000 (0:00:02.002) 0:00:39.826 ******* 2026-03-18 04:04:45.199876 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:04:45.199885 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:04:45.199935 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:06:40.471819 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:06:40.471938 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:06:40.471956 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:06:40.471968 | orchestrator | 2026-03-18 04:06:40.471981 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-03-18 04:06:40.471994 | orchestrator | Wednesday 18 March 2026 04:04:46 +0000 (0:00:01.972) 0:00:41.798 ******* 2026-03-18 04:06:40.472006 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:06:40.472017 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:06:40.472028 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:06:40.472039 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:06:40.472049 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:06:40.472060 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:06:40.472071 | orchestrator | 2026-03-18 04:06:40.472082 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-18 04:06:40.472093 | orchestrator | 2026-03-18 04:06:40.472104 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-18 04:06:40.472116 | orchestrator | Wednesday 18 March 2026 04:04:49 +0000 (0:00:02.801) 0:00:44.600 ******* 2026-03-18 04:06:40.472127 | orchestrator | ok: [testbed-node-0] 2026-03-18 
04:06:40.472139 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:06:40.472150 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:06:40.472161 | orchestrator | 2026-03-18 04:06:40.472173 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-18 04:06:40.472184 | orchestrator | Wednesday 18 March 2026 04:04:51 +0000 (0:00:01.941) 0:00:46.541 ******* 2026-03-18 04:06:40.472195 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:06:40.472206 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:06:40.472217 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:06:40.472228 | orchestrator | 2026-03-18 04:06:40.472239 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-18 04:06:40.472249 | orchestrator | Wednesday 18 March 2026 04:04:54 +0000 (0:00:03.061) 0:00:49.603 ******* 2026-03-18 04:06:40.472260 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:06:40.472289 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:06:40.472301 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:06:40.472312 | orchestrator | 2026-03-18 04:06:40.472329 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-18 04:06:40.472342 | orchestrator | Wednesday 18 March 2026 04:04:56 +0000 (0:00:02.113) 0:00:51.716 ******* 2026-03-18 04:06:40.472355 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:06:40.472409 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:06:40.472429 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:06:40.472481 | orchestrator | 2026-03-18 04:06:40.472497 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-03-18 04:06:40.472510 | orchestrator | Wednesday 18 March 2026 04:04:58 +0000 (0:00:01.931) 0:00:53.648 ******* 2026-03-18 04:06:40.472522 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:06:40.472535 | orchestrator | skipping: 
[testbed-node-1] 2026-03-18 04:06:40.472547 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:06:40.472560 | orchestrator | 2026-03-18 04:06:40.472572 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-18 04:06:40.472584 | orchestrator | Wednesday 18 March 2026 04:04:59 +0000 (0:00:01.327) 0:00:54.975 ******* 2026-03-18 04:06:40.472597 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:06:40.472610 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:06:40.472623 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:06:40.472635 | orchestrator | 2026-03-18 04:06:40.472648 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-18 04:06:40.472660 | orchestrator | Wednesday 18 March 2026 04:05:01 +0000 (0:00:01.801) 0:00:56.777 ******* 2026-03-18 04:06:40.472674 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:06:40.472688 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:06:40.472700 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:06:40.472712 | orchestrator | 2026-03-18 04:06:40.472730 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-18 04:06:40.472747 | orchestrator | Wednesday 18 March 2026 04:05:03 +0000 (0:00:02.311) 0:00:59.088 ******* 2026-03-18 04:06:40.472765 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:06:40.472782 | orchestrator | 2026-03-18 04:06:40.472799 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-18 04:06:40.472816 | orchestrator | Wednesday 18 March 2026 04:05:05 +0000 (0:00:01.999) 0:01:01.088 ******* 2026-03-18 04:06:40.472836 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:06:40.472854 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:06:40.472873 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:06:40.472886 | 
orchestrator | 2026-03-18 04:06:40.472897 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-18 04:06:40.472908 | orchestrator | Wednesday 18 March 2026 04:05:08 +0000 (0:00:02.393) 0:01:03.481 ******* 2026-03-18 04:06:40.472919 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:06:40.472930 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:06:40.472941 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:06:40.472952 | orchestrator | 2026-03-18 04:06:40.472962 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-18 04:06:40.472973 | orchestrator | Wednesday 18 March 2026 04:05:09 +0000 (0:00:01.575) 0:01:05.057 ******* 2026-03-18 04:06:40.472984 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:06:40.472995 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:06:40.473005 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:06:40.473016 | orchestrator | 2026-03-18 04:06:40.473027 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-18 04:06:40.473038 | orchestrator | Wednesday 18 March 2026 04:05:11 +0000 (0:00:01.977) 0:01:07.034 ******* 2026-03-18 04:06:40.473049 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:06:40.473060 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:06:40.473070 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:06:40.473081 | orchestrator | 2026-03-18 04:06:40.473092 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-18 04:06:40.473103 | orchestrator | Wednesday 18 March 2026 04:05:14 +0000 (0:00:02.456) 0:01:09.491 ******* 2026-03-18 04:06:40.473113 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:06:40.473125 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:06:40.473154 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:06:40.473166 | 
orchestrator | 2026-03-18 04:06:40.473177 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-18 04:06:40.473188 | orchestrator | Wednesday 18 March 2026 04:05:15 +0000 (0:00:01.505) 0:01:10.996 ******* 2026-03-18 04:06:40.473209 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:06:40.473220 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:06:40.473231 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:06:40.473242 | orchestrator | 2026-03-18 04:06:40.473253 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-18 04:06:40.473264 | orchestrator | Wednesday 18 March 2026 04:05:17 +0000 (0:00:01.551) 0:01:12.548 ******* 2026-03-18 04:06:40.473274 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:06:40.473285 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:06:40.473296 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:06:40.473307 | orchestrator | 2026-03-18 04:06:40.473317 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-18 04:06:40.473328 | orchestrator | Wednesday 18 March 2026 04:05:19 +0000 (0:00:02.252) 0:01:14.801 ******* 2026-03-18 04:06:40.473339 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:06:40.473350 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:06:40.473361 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:06:40.473400 | orchestrator | 2026-03-18 04:06:40.473411 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-18 04:06:40.473422 | orchestrator | Wednesday 18 March 2026 04:05:21 +0000 (0:00:02.009) 0:01:16.811 ******* 2026-03-18 04:06:40.473433 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:06:40.473443 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:06:40.473454 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:06:40.473465 | orchestrator | 2026-03-18 04:06:40.473476 
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-18 04:06:40.473487 | orchestrator | Wednesday 18 March 2026 04:05:23 +0000 (0:00:01.391) 0:01:18.202 ******* 2026-03-18 04:06:40.473498 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-18 04:06:40.473511 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-18 04:06:40.473522 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-18 04:06:40.473533 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-18 04:06:40.473544 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-18 04:06:40.473554 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-03-18 04:06:40.473565 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:06:40.473576 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:06:40.473587 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:06:40.473598 | orchestrator |
2026-03-18 04:06:40.473609 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-18 04:06:40.473619 | orchestrator | Wednesday 18 March 2026 04:05:46 +0000 (0:00:23.416) 0:01:41.619 *******
2026-03-18 04:06:40.473630 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:06:40.473641 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:06:40.473652 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:06:40.473663 | orchestrator |
2026-03-18 04:06:40.473673 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-18 04:06:40.473684 | orchestrator | Wednesday 18 March 2026 04:05:47 +0000 (0:00:01.460) 0:01:43.079 *******
2026-03-18 04:06:40.473695 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:06:40.473706 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:06:40.473717 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:06:40.473728 | orchestrator |
2026-03-18 04:06:40.473739 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-18 04:06:40.473757 | orchestrator | Wednesday 18 March 2026 04:05:49 +0000 (0:00:02.056) 0:01:45.135 *******
2026-03-18 04:06:40.473769 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:06:40.473779 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:06:40.473790 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:06:40.473801 | orchestrator |
2026-03-18 04:06:40.473812 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-18 04:06:40.473823 | orchestrator | Wednesday 18 March 2026 04:05:52 +0000 (0:00:02.308) 0:01:47.444 *******
2026-03-18 04:06:40.473833 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:06:40.473844 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:06:40.473855 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:06:40.473866 | orchestrator |
2026-03-18 04:06:40.473877 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-18 04:06:40.473888 | orchestrator | Wednesday 18 March 2026 04:06:34 +0000 (0:00:42.705) 0:02:30.150 *******
2026-03-18 04:06:40.473899 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:06:40.473910 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:06:40.473921 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:06:40.473931 | orchestrator |
2026-03-18 04:06:40.473942 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-18 04:06:40.473953 | orchestrator | Wednesday 18 March 2026 04:06:36 +0000 (0:00:01.753) 0:02:31.903 *******
2026-03-18 04:06:40.473964 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:06:40.473974 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:06:40.473985 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:06:40.473996 | orchestrator |
2026-03-18 04:06:40.474007 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-18 04:06:40.474079 | orchestrator | Wednesday 18 March 2026 04:06:38 +0000 (0:00:01.751) 0:02:33.655 *******
2026-03-18 04:06:40.474094 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:06:40.474105 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:06:40.474124 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:06:40.474136 | orchestrator |
2026-03-18 04:06:40.474156 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-18 04:07:29.565814 | orchestrator | Wednesday 18 March 2026 04:06:40 +0000 (0:00:01.980) 0:02:35.636 *******
2026-03-18 04:07:29.565944 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:07:29.565971 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:07:29.565989 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:07:29.566005 | orchestrator |
2026-03-18 04:07:29.566109 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-18 04:07:29.566130 | orchestrator | Wednesday 18 March 2026 04:06:42 +0000 (0:00:01.746) 0:02:37.382 *******
2026-03-18 04:07:29.566149 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:07:29.566167 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:07:29.566184 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:07:29.566202 | orchestrator |
2026-03-18 04:07:29.566220 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-18 04:07:29.566238 | orchestrator | Wednesday 18 March 2026 04:06:43 +0000 (0:00:01.381) 0:02:38.764 *******
2026-03-18 04:07:29.566254 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:07:29.566273 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:07:29.566291 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:07:29.566309 | orchestrator |
2026-03-18 04:07:29.566326 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-18 04:07:29.566344 | orchestrator | Wednesday 18 March 2026 04:06:45 +0000 (0:00:01.766) 0:02:40.530 *******
2026-03-18 04:07:29.566362 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:07:29.566382 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:07:29.566400 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:07:29.566419 | orchestrator |
2026-03-18 04:07:29.566444 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-18 04:07:29.566461 | orchestrator | Wednesday 18 March 2026 04:06:47 +0000 (0:00:01.940) 0:02:42.471 *******
2026-03-18 04:07:29.566486 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:07:29.566569 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:07:29.566590 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:07:29.566608 | orchestrator |
2026-03-18 04:07:29.566627 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-18 04:07:29.566665 | orchestrator | Wednesday 18 March 2026 04:06:49 +0000 (0:00:01.834) 0:02:44.305 *******
2026-03-18 04:07:29.566685 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:07:29.566703 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:07:29.566721 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:07:29.566739 | orchestrator |
2026-03-18 04:07:29.566757 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-18 04:07:29.566775 | orchestrator | Wednesday 18 March 2026 04:06:51 +0000 (0:00:02.002) 0:02:46.308 *******
2026-03-18 04:07:29.566793 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:07:29.566811 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:07:29.566828 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:07:29.566844 | orchestrator |
2026-03-18 04:07:29.566861 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-18 04:07:29.566878 | orchestrator | Wednesday 18 March 2026 04:06:52 +0000 (0:00:01.503) 0:02:47.811 *******
2026-03-18 04:07:29.566894 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:07:29.566912 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:07:29.566930 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:07:29.566947 | orchestrator |
2026-03-18 04:07:29.566965 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-18 04:07:29.566981 | orchestrator | Wednesday 18 March 2026 04:06:54 +0000 (0:00:01.411) 0:02:49.223 *******
2026-03-18 04:07:29.566998 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:07:29.567014 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:07:29.567032 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:07:29.567050 | orchestrator |
2026-03-18 04:07:29.567068 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-18 04:07:29.567086 | orchestrator | Wednesday 18 March 2026 04:06:55 +0000 (0:00:01.794) 0:02:51.018 *******
2026-03-18 04:07:29.567104 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:07:29.567122 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:07:29.567140 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:07:29.567159 | orchestrator |
2026-03-18 04:07:29.567180 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-18 04:07:29.567202 | orchestrator | Wednesday 18 March 2026 04:06:57 +0000 (0:00:01.646) 0:02:52.665 *******
2026-03-18 04:07:29.567221 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-18 04:07:29.567240 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-18 04:07:29.567257 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-18 04:07:29.567274 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-18 04:07:29.567293 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-18 04:07:29.567312 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-18 04:07:29.567332 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-18 04:07:29.567351 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-18 04:07:29.567371 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-18 04:07:29.567390 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-18 04:07:29.567408 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-18 04:07:29.567445 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-18 04:07:29.567493 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-18 04:07:29.567514 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-18 04:07:29.567566 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-18 04:07:29.567585 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-18 04:07:29.567602 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-18 04:07:29.567621 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-18 04:07:29.567638 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-18 04:07:29.567656 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-18 04:07:29.567674 | orchestrator |
2026-03-18 04:07:29.567690 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-18 04:07:29.567707 | orchestrator |
2026-03-18 04:07:29.567725 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-18 04:07:29.567742 | orchestrator | Wednesday 18 March 2026 04:07:01 +0000 (0:00:04.456) 0:02:57.121 *******
2026-03-18 04:07:29.567760 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:07:29.567777 | orchestrator | ok: [testbed-node-4]
2026-03-18 04:07:29.567794 | orchestrator | ok: [testbed-node-5]
2026-03-18 04:07:29.567810 | orchestrator |
2026-03-18 04:07:29.567828 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-18 04:07:29.567843 | orchestrator | Wednesday 18 March 2026 04:07:03 +0000 (0:00:01.369) 0:02:58.491 *******
2026-03-18 04:07:29.567858 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:07:29.567874 | orchestrator | ok: [testbed-node-4]
2026-03-18 04:07:29.567891 | orchestrator | ok: [testbed-node-5]
2026-03-18 04:07:29.567907 | orchestrator |
2026-03-18 04:07:29.567925 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-18 04:07:29.567941 | orchestrator | Wednesday 18 March 2026 04:07:04 +0000 (0:00:01.675) 0:03:00.167 *******
2026-03-18 04:07:29.567958 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:07:29.567977 | orchestrator | ok: [testbed-node-4]
2026-03-18 04:07:29.567994 | orchestrator | ok: [testbed-node-5]
2026-03-18 04:07:29.568011 | orchestrator |
2026-03-18 04:07:29.568027 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-18 04:07:29.568043 | orchestrator | Wednesday 18 March 2026 04:07:06 +0000 (0:00:01.651) 0:03:01.818 *******
2026-03-18 04:07:29.568060 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 04:07:29.568076 | orchestrator |
2026-03-18 04:07:29.568093 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-18 04:07:29.568110 | orchestrator | Wednesday 18 March 2026 04:07:08 +0000 (0:00:01.715) 0:03:03.533 *******
2026-03-18 04:07:29.568125 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:07:29.568141 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:07:29.568157 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:07:29.568174 | orchestrator |
2026-03-18 04:07:29.568190 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-18 04:07:29.568207 | orchestrator | Wednesday 18 March 2026 04:07:09 +0000 (0:00:01.382) 0:03:04.916 *******
2026-03-18 04:07:29.568224 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:07:29.568241 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:07:29.568257 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:07:29.568274 | orchestrator |
2026-03-18 04:07:29.568292 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-18 04:07:29.568310 | orchestrator | Wednesday 18 March 2026 04:07:11 +0000 (0:00:01.587) 0:03:06.503 *******
2026-03-18 04:07:29.568346 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:07:29.568363 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:07:29.568380 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:07:29.568397 | orchestrator |
2026-03-18 04:07:29.568414 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-18 04:07:29.568432 | orchestrator | Wednesday 18 March 2026 04:07:12 +0000 (0:00:01.425) 0:03:07.928 *******
2026-03-18 04:07:29.568450 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:07:29.568466 | orchestrator | ok: [testbed-node-4]
2026-03-18 04:07:29.568485 | orchestrator | ok: [testbed-node-5]
2026-03-18 04:07:29.568503 | orchestrator |
2026-03-18 04:07:29.568520 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-18 04:07:29.568571 | orchestrator | Wednesday 18 March 2026 04:07:14 +0000 (0:00:01.768) 0:03:09.697 *******
2026-03-18 04:07:29.568590 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:07:29.568607 | orchestrator | ok: [testbed-node-4]
2026-03-18 04:07:29.568625 | orchestrator | ok: [testbed-node-5]
2026-03-18 04:07:29.568643 | orchestrator |
2026-03-18 04:07:29.568660 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-18 04:07:29.568678 | orchestrator | Wednesday 18 March 2026 04:07:16 +0000 (0:00:02.231) 0:03:11.929 *******
2026-03-18 04:07:29.568695 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:07:29.568712 | orchestrator | ok: [testbed-node-4]
2026-03-18 04:07:29.568729 | orchestrator | ok: [testbed-node-5]
2026-03-18 04:07:29.568745 | orchestrator |
2026-03-18 04:07:29.568782 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-18 04:07:29.568800 | orchestrator | Wednesday 18 March 2026 04:07:19 +0000 (0:00:02.315) 0:03:14.244 *******
2026-03-18 04:07:29.568818 | orchestrator | changed: [testbed-node-3]
2026-03-18 04:07:29.568835 | orchestrator | changed: [testbed-node-4]
2026-03-18 04:07:29.568852 | orchestrator | changed: [testbed-node-5]
2026-03-18 04:07:29.568869 | orchestrator |
2026-03-18 04:07:29.568885 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-18 04:07:29.568902 | orchestrator |
2026-03-18 04:07:29.568919 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-18 04:07:29.568939 | orchestrator | Wednesday 18 March 2026 04:07:27 +0000 (0:00:08.358) 0:03:22.603 *******
2026-03-18 04:07:29.568957 | orchestrator | ok: [testbed-manager]
2026-03-18 04:07:29.568975 | orchestrator |
2026-03-18 04:07:29.568993 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-18 04:07:29.569034 | orchestrator | Wednesday 18 March 2026 04:07:29 +0000 (0:00:02.129) 0:03:24.733 *******
2026-03-18 04:08:39.221503 | orchestrator | ok: [testbed-manager]
2026-03-18 04:08:39.221641 | orchestrator |
2026-03-18 04:08:39.221669 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-18 04:08:39.221692 | orchestrator | Wednesday 18 March 2026 04:07:30 +0000 (0:00:01.421) 0:03:26.154 *******
2026-03-18 04:08:39.221705 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-18 04:08:39.221769 | orchestrator |
2026-03-18 04:08:39.221780 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-18 04:08:39.221792 | orchestrator | Wednesday 18 March 2026 04:07:32 +0000 (0:00:01.564) 0:03:27.719 *******
2026-03-18 04:08:39.221803 | orchestrator | changed: [testbed-manager]
2026-03-18 04:08:39.221815 | orchestrator |
2026-03-18 04:08:39.221826 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-18 04:08:39.221837 | orchestrator | Wednesday 18 March 2026 04:07:34 +0000 (0:00:01.941) 0:03:29.661 *******
2026-03-18 04:08:39.221847 | orchestrator | changed: [testbed-manager]
2026-03-18 04:08:39.221858 | orchestrator |
2026-03-18 04:08:39.221869 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-18 04:08:39.221880 | orchestrator | Wednesday 18 March 2026 04:07:36 +0000 (0:00:01.586) 0:03:31.247 *******
2026-03-18 04:08:39.221891 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-18 04:08:39.221931 | orchestrator |
2026-03-18 04:08:39.221943 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-18 04:08:39.221954 | orchestrator | Wednesday 18 March 2026 04:07:38 +0000 (0:00:02.927) 0:03:34.174 *******
2026-03-18 04:08:39.221965 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-18 04:08:39.221975 | orchestrator |
2026-03-18 04:08:39.221986 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-03-18 04:08:39.221997 | orchestrator | Wednesday 18 March 2026 04:07:40 +0000 (0:00:01.826) 0:03:36.001 *******
2026-03-18 04:08:39.222082 | orchestrator | ok: [testbed-manager]
2026-03-18 04:08:39.222098 | orchestrator |
2026-03-18 04:08:39.222111 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-18 04:08:39.222124 | orchestrator | Wednesday 18 March 2026 04:07:42 +0000 (0:00:01.417) 0:03:37.419 *******
2026-03-18 04:08:39.222137 | orchestrator | ok: [testbed-manager]
2026-03-18 04:08:39.222149 | orchestrator |
2026-03-18 04:08:39.222172 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-18 04:08:39.222185 | orchestrator |
2026-03-18 04:08:39.222198 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-18 04:08:39.222210 | orchestrator | Wednesday 18 March 2026 04:07:43 +0000 (0:00:01.137) 0:03:39.020 *******
2026-03-18 04:08:39.222224 | orchestrator | ok: [testbed-manager]
2026-03-18 04:08:39.222236 | orchestrator |
2026-03-18 04:08:39.222249 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-18 04:08:39.222261 | orchestrator | Wednesday 18 March 2026 04:07:44 +0000 (0:00:01.468) 0:03:40.158 *******
2026-03-18 04:08:39.222274 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-18 04:08:39.222287 | orchestrator |
2026-03-18 04:08:39.222299 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-18 04:08:39.222311 | orchestrator | Wednesday 18 March 2026 04:07:46 +0000 (0:00:01.869) 0:03:41.626 *******
2026-03-18 04:08:39.222323 | orchestrator | ok: [testbed-manager]
2026-03-18 04:08:39.222335 | orchestrator |
2026-03-18 04:08:39.222347 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-18 04:08:39.222359 | orchestrator | Wednesday 18 March 2026 04:07:48 +0000 (0:00:01.869) 0:03:43.496 *******
2026-03-18 04:08:39.222373 | orchestrator | ok: [testbed-manager]
2026-03-18 04:08:39.222386 | orchestrator |
2026-03-18 04:08:39.222396 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-18 04:08:39.222407 | orchestrator | Wednesday 18 March 2026 04:07:51 +0000 (0:00:02.733) 0:03:46.230 *******
2026-03-18 04:08:39.222418 | orchestrator | ok: [testbed-manager]
2026-03-18 04:08:39.222428 | orchestrator |
2026-03-18 04:08:39.222439 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-18 04:08:39.222450 | orchestrator | Wednesday 18 March 2026 04:07:52 +0000 (0:00:01.436) 0:03:47.666 *******
2026-03-18 04:08:39.222461 | orchestrator | ok: [testbed-manager]
2026-03-18 04:08:39.222471 | orchestrator |
2026-03-18 04:08:39.222482 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-18 04:08:39.222493 | orchestrator | Wednesday 18 March 2026 04:07:53 +0000 (0:00:01.461) 0:03:49.128 *******
2026-03-18 04:08:39.222503 | orchestrator | ok: [testbed-manager]
2026-03-18 04:08:39.222514 | orchestrator |
2026-03-18 04:08:39.222525 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-18 04:08:39.222536 | orchestrator | Wednesday 18 March 2026 04:07:55 +0000 (0:00:01.668) 0:03:50.797 *******
2026-03-18 04:08:39.222546 | orchestrator | ok: [testbed-manager]
2026-03-18 04:08:39.222557 | orchestrator |
2026-03-18 04:08:39.222568 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-18 04:08:39.222579 | orchestrator | Wednesday 18 March 2026 04:07:58 +0000 (0:00:02.541) 0:03:53.339 *******
2026-03-18 04:08:39.222590 | orchestrator | ok: [testbed-manager]
2026-03-18 04:08:39.222601 | orchestrator |
2026-03-18 04:08:39.222612 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-18 04:08:39.222633 | orchestrator |
2026-03-18 04:08:39.222643 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-18 04:08:39.222654 | orchestrator | Wednesday 18 March 2026 04:07:59 +0000 (0:00:01.663) 0:03:55.002 *******
2026-03-18 04:08:39.222665 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:08:39.222676 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:08:39.222687 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:08:39.222697 | orchestrator |
2026-03-18 04:08:39.222708 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-18 04:08:39.222736 | orchestrator | Wednesday 18 March 2026 04:08:01 +0000 (0:00:01.544) 0:03:56.547 *******
2026-03-18 04:08:39.222747 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:08:39.222758 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:08:39.222768 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:08:39.222779 | orchestrator |
2026-03-18 04:08:39.222808 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-18 04:08:39.222820 | orchestrator | Wednesday 18 March 2026 04:08:02 +0000 (0:00:01.534) 0:03:58.082 *******
2026-03-18 04:08:39.222831 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 04:08:39.222842 | orchestrator |
2026-03-18 04:08:39.222852 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-18 04:08:39.222863 | orchestrator | Wednesday 18 March 2026 04:08:04 +0000 (0:00:01.686) 0:03:59.769 *******
2026-03-18 04:08:39.222874 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-18 04:08:39.222885 | orchestrator |
2026-03-18 04:08:39.222896 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-03-18 04:08:39.222906 | orchestrator | Wednesday 18 March 2026 04:08:06 +0000 (0:00:01.819) 0:04:01.588 *******
2026-03-18 04:08:39.222917 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-18 04:08:39.222928 | orchestrator |
2026-03-18 04:08:39.222939 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-18 04:08:39.222950 | orchestrator | Wednesday 18 March 2026 04:08:08 +0000 (0:00:01.946) 0:04:03.535 *******
2026-03-18 04:08:39.222960 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:08:39.222971 | orchestrator |
2026-03-18 04:08:39.222982 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-18 04:08:39.222993 | orchestrator | Wednesday 18 March 2026 04:08:09 +0000 (0:00:01.134) 0:04:04.669 *******
2026-03-18 04:08:39.223003 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-18 04:08:39.223014 | orchestrator |
2026-03-18 04:08:39.223025 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-18 04:08:39.223036 | orchestrator | Wednesday 18 March 2026 04:08:11 +0000 (0:00:02.059) 0:04:06.728 *******
2026-03-18 04:08:39.223046 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-18 04:08:39.223057 | orchestrator |
2026-03-18 04:08:39.223068 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-18 04:08:39.223079 | orchestrator | Wednesday 18 March 2026 04:08:13 +0000 (0:00:02.149) 0:04:08.877 *******
2026-03-18 04:08:39.223089 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-18 04:08:39.223100 | orchestrator |
2026-03-18 04:08:39.223111 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-18 04:08:39.223122 | orchestrator | Wednesday 18 March 2026 04:08:14 +0000 (0:00:01.207) 0:04:10.085 *******
2026-03-18 04:08:39.223132 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-18 04:08:39.223143 | orchestrator |
2026-03-18 04:08:39.223154 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-18 04:08:39.223165 | orchestrator | Wednesday 18 March 2026 04:08:16 +0000 (0:00:01.242) 0:04:11.328 *******
2026-03-18 04:08:39.223176 | orchestrator | ok: [testbed-node-0 -> localhost] => {
2026-03-18 04:08:39.223187 | orchestrator |     "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n"
2026-03-18 04:08:39.223199 | orchestrator | }
2026-03-18 04:08:39.223218 | orchestrator |
2026-03-18 04:08:39.223229 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-18 04:08:39.223240 | orchestrator | Wednesday 18 March 2026 04:08:17 +0000 (0:00:01.140) 0:04:12.468 *******
2026-03-18 04:08:39.223250 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:08:39.223261 | orchestrator |
2026-03-18 04:08:39.223272 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-18 04:08:39.223282 | orchestrator | Wednesday 18 March 2026 04:08:18 +0000 (0:00:01.166) 0:04:13.635 *******
2026-03-18 04:08:39.223293 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-18 04:08:39.223304 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-18 04:08:39.223315 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-18 04:08:39.223326 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-18 04:08:39.223336 | orchestrator |
2026-03-18 04:08:39.223347 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-18 04:08:39.223358 | orchestrator | Wednesday 18 March 2026 04:08:24 +0000 (0:00:05.698) 0:04:19.334 *******
2026-03-18 04:08:39.223368 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-18 04:08:39.223379 | orchestrator |
2026-03-18 04:08:39.223390 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-18 04:08:39.223400 | orchestrator | Wednesday 18 March 2026 04:08:26 +0000 (0:00:02.418) 0:04:21.752 *******
2026-03-18 04:08:39.223411 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-18 04:08:39.223422 | orchestrator |
2026-03-18 04:08:39.223433 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-18 04:08:39.223444 | orchestrator | Wednesday 18 March 2026 04:08:29 +0000 (0:00:02.747) 0:04:24.500 *******
2026-03-18 04:08:39.223454 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-18 04:08:39.223465 | orchestrator |
2026-03-18 04:08:39.223481 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-18 04:08:39.223500 | orchestrator | Wednesday 18 March 2026 04:08:33 +0000 (0:00:04.318) 0:04:28.819 *******
2026-03-18 04:08:39.223517 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:08:39.223534 | orchestrator |
2026-03-18 04:08:39.223552 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-18 04:08:39.223570 | orchestrator | Wednesday 18 March 2026 04:08:34 +0000 (0:00:01.185) 0:04:30.004 *******
2026-03-18 04:08:39.223589 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-18 04:08:39.223620 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-18 04:08:39.223641 | orchestrator |
2026-03-18 04:08:39.223659 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-18 04:08:39.223677 | orchestrator | Wednesday 18 March 2026 04:08:37 +0000 (0:00:02.962) 0:04:32.966 *******
2026-03-18 04:08:39.223694 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:08:39.223748 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:09:04.886118 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:09:04.886223 | orchestrator |
2026-03-18 04:09:04.886238 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-18 04:09:04.886249 | orchestrator | Wednesday 18 March 2026 04:08:39 +0000 (0:00:01.429) 0:04:34.395 *******
2026-03-18 04:09:04.886258 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:09:04.886268 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:09:04.886277 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:09:04.886286 | orchestrator |
2026-03-18 04:09:04.886296 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-18 04:09:04.886304 | orchestrator |
2026-03-18 04:09:04.886313 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-18 04:09:04.886322 | orchestrator | Wednesday 18 March 2026 04:08:41 +0000 (0:00:02.113) 0:04:36.509 *******
2026-03-18 04:09:04.886332 | orchestrator | ok: [testbed-manager]
2026-03-18 04:09:04.886364 | orchestrator |
2026-03-18 04:09:04.886374 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-18 04:09:04.886383 | orchestrator | Wednesday 18 March 2026 04:08:42 +0000 (0:00:01.171) 0:04:37.681 *******
2026-03-18 04:09:04.886391 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-18 04:09:04.886401 | orchestrator |
2026-03-18 04:09:04.886409 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-18 04:09:04.886418 | orchestrator | Wednesday 18 March 2026 04:08:43 +0000 (0:00:01.484) 0:04:39.165 *******
2026-03-18 04:09:04.886427 | orchestrator | ok: [testbed-manager]
2026-03-18 04:09:04.886435 | orchestrator |
2026-03-18 04:09:04.886444 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-18 04:09:04.886467 | orchestrator |
2026-03-18 04:09:04.886476 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-18 04:09:04.886507 | orchestrator | Wednesday 18 March 2026 04:08:48 +0000 (0:00:04.853) 0:04:44.018 *******
2026-03-18 04:09:04.886517 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:09:04.886526 | orchestrator | ok: [testbed-node-4]
2026-03-18 04:09:04.886535 | orchestrator | ok: [testbed-node-5]
2026-03-18 04:09:04.886543 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:09:04.886552 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:09:04.886560 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:09:04.886570 | orchestrator |
2026-03-18 04:09:04.886580 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-18 04:09:04.886590 | orchestrator | Wednesday 18 March 2026 04:08:50 +0000 (0:00:01.918) 0:04:45.937 *******
2026-03-18 04:09:04.886601 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-18 04:09:04.886611 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-18 04:09:04.886622 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-18 04:09:04.886632 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-18 04:09:04.886642 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-18 04:09:04.886652 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-18 04:09:04.886662 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-18 04:09:04.886673 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-18 04:09:04.886683 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-18 04:09:04.886694 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-18 04:09:04.886704 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-18 04:09:04.886715 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-18 04:09:04.886726 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-18 04:09:04.886736 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-18 04:09:04.886747 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-18 04:09:04.886757 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-18 04:09:04.886767 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-18 04:09:04.886841 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-18 04:09:04.886852 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-18 04:09:04.886861 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-18 04:09:04.886880 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-18 04:09:04.886890 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-18 04:09:04.886901 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-18 
04:09:04.886912 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-18 04:09:04.886922 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-18 04:09:04.886931 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-18 04:09:04.886956 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-18 04:09:04.886965 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-18 04:09:04.886974 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-18 04:09:04.886982 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-18 04:09:04.886991 | orchestrator | 2026-03-18 04:09:04.886999 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-18 04:09:04.887008 | orchestrator | Wednesday 18 March 2026 04:09:00 +0000 (0:00:09.759) 0:04:55.697 ******* 2026-03-18 04:09:04.887017 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:09:04.887026 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:09:04.887034 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:09:04.887043 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:09:04.887052 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:09:04.887060 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:09:04.887069 | orchestrator | 2026-03-18 04:09:04.887078 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-18 04:09:04.887087 | orchestrator | Wednesday 18 March 2026 04:09:02 +0000 (0:00:01.856) 0:04:57.553 ******* 2026-03-18 04:09:04.887096 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:09:04.887105 | orchestrator | skipping: [testbed-node-4] 
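The "Manage labels" task above applies a fixed label set per node group: testbed-node-0..2 receive control-plane, network-plane, and rook service labels, while testbed-node-3..5 receive compute-plane, worker, and rook-osd labels. A minimal sketch of that grouping (hypothetical helper, not the playbook's actual variables or merge logic), emitting the equivalent kubectl commands:

```python
# Illustrative only: label sets transcribed from the log above; the real
# playbook merges these from inventory variables.
CONTROL_PLANE = {
    "node-role.osism.tech/control-plane": "true",
    "openstack-control-plane": "enabled",
    "node-role.osism.tech/network-plane": "true",
    "node-role.osism.tech/rook-mds": "true",
    "node-role.osism.tech/rook-mgr": "true",
    "node-role.osism.tech/rook-mon": "true",
    "node-role.osism.tech/rook-rgw": "true",
}
WORKER = {
    "node-role.osism.tech/compute-plane": "true",
    "node-role.kubernetes.io/worker": "worker",
    "node-role.osism.tech/rook-osd": "true",
}

def labels_for(node: str) -> dict:
    """Merged label set for a testbed node: per the log, nodes 0-2 are
    control plane and nodes 3-5 are workers."""
    index = int(node.rsplit("-", 1)[1])
    return dict(CONTROL_PLANE) if index < 3 else dict(WORKER)

for node in ("testbed-node-0", "testbed-node-3"):
    for key, value in sorted(labels_for(node).items()):
        print(f"kubectl label node {node} {key}={value} --overwrite")
```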
2026-03-18 04:09:04.887113 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:09:04.887122 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:09:04.887130 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:09:04.887139 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:09:04.887147 | orchestrator |
2026-03-18 04:09:04.887156 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 04:09:04.887170 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 04:09:04.887182 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-18 04:09:04.887191 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-18 04:09:04.887200 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-18 04:09:04.887208 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-18 04:09:04.887217 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-18 04:09:04.887225 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-18 04:09:04.887234 | orchestrator |
2026-03-18 04:09:04.887243 | orchestrator |
2026-03-18 04:09:04.887251 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 04:09:04.887266 | orchestrator | Wednesday 18 March 2026 04:09:04 +0000 (0:00:02.485) 0:05:00.039 *******
2026-03-18 04:09:04.887274 | orchestrator | ===============================================================================
2026-03-18 04:09:04.887283 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 42.71s
2026-03-18 04:09:04.887292 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.42s
2026-03-18 04:09:04.887302 | orchestrator | Manage labels ----------------------------------------------------------- 9.76s
2026-03-18 04:09:04.887310 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.36s
2026-03-18 04:09:04.887319 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.70s
2026-03-18 04:09:04.887327 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.85s
2026-03-18 04:09:04.887336 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.65s
2026-03-18 04:09:04.887345 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.46s
2026-03-18 04:09:04.887353 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.32s
2026-03-18 04:09:04.887362 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.06s
2026-03-18 04:09:04.887371 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 3.06s
2026-03-18 04:09:04.887379 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.96s
2026-03-18 04:09:04.887388 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.93s
2026-03-18 04:09:04.887396 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.80s
2026-03-18 04:09:04.887405 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.79s
2026-03-18 04:09:04.887414 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.75s
2026-03-18 04:09:04.887422 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.73s
2026-03-18 04:09:04.887431 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.59s
2026-03-18 04:09:04.887446 | orchestrator | kubectl : Install required packages ------------------------------------- 2.54s
2026-03-18 04:09:05.360383 | orchestrator | Manage taints ----------------------------------------------------------- 2.49s
2026-03-18 04:09:05.682682 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-03-18 04:09:05.682849 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh
2026-03-18 04:09:05.690621 | orchestrator | + set -e
2026-03-18 04:09:05.690704 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-18 04:09:05.690725 | orchestrator | ++ export INTERACTIVE=false
2026-03-18 04:09:05.690743 | orchestrator | ++ INTERACTIVE=false
2026-03-18 04:09:05.690759 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-18 04:09:05.690822 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-18 04:09:05.690836 | orchestrator | + osism apply openstackclient
2026-03-18 04:09:17.868509 | orchestrator | 2026-03-18 04:09:17 | INFO  | Task 4c48515d-b782-4be7-90af-3be03ba3408a (openstackclient) was prepared for execution.
2026-03-18 04:09:17.868627 | orchestrator | 2026-03-18 04:09:17 | INFO  | It takes a moment until task 4c48515d-b782-4be7-90af-3be03ba3408a (openstackclient) has been started and output is visible here.
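The TASKS RECAP lines follow a fixed shape: task name, a run of padding dashes, then the duration in seconds. A small sketch (hypothetical helper, not part of the job) that extracts (task, seconds) pairs from such lines:

```python
import re

# Matches "task name ------ 42.71s" style profile lines; the dash run
# (two or more dashes) separates the name from the duration, so single
# dashes inside task names like "k3s-init" are left alone.
RECAP_RE = re.compile(r"^(?P<task>.+?)\s*-{2,}\s*(?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return (task, seconds) pairs for every recap-shaped line."""
    results = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            results.append((m.group("task"), float(m.group("secs"))))
    return results

recap = [
    "k3s_server : Enable and check K3s service ------------------------------ 42.71s",
    "Manage labels ----------------------------------------------------------- 9.76s",
]
print(parse_recap(recap))
```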
2026-03-18 04:09:44.289409 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-03-18 04:09:44.289528 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-03-18 04:09:44.289556 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-03-18 04:09:44.289567 | orchestrator | (): 'NoneType' object is not subscriptable
2026-03-18 04:09:44.289615 | orchestrator |
2026-03-18 04:09:44.289644 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-18 04:09:44.289655 | orchestrator |
2026-03-18 04:09:44.289666 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-18 04:09:44.289677 | orchestrator | Wednesday 18 March 2026 04:09:24 +0000 (0:00:01.694) 0:00:01.694 *******
2026-03-18 04:09:44.289690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-18 04:09:44.289702 | orchestrator |
2026-03-18 04:09:44.289713 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-18 04:09:44.289723 | orchestrator | Wednesday 18 March 2026 04:09:25 +0000 (0:00:00.842) 0:00:02.536 *******
2026-03-18 04:09:44.289734 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-18 04:09:44.289745 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-18 04:09:44.289755 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-18 04:09:44.289766 | orchestrator |
2026-03-18 04:09:44.289777 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-18 04:09:44.289787 | orchestrator | Wednesday 18 March 2026 04:09:26 +0000 (0:00:01.426) 0:00:03.963 *******
2026-03-18 04:09:44.289798 | orchestrator | changed: [testbed-manager]
2026-03-18 04:09:44.289809 | orchestrator |
2026-03-18 04:09:44.289820 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-18 04:09:44.289830 | orchestrator | Wednesday 18 March 2026 04:09:27 +0000 (0:00:01.291) 0:00:05.254 *******
2026-03-18 04:09:44.289841 | orchestrator | ok: [testbed-manager]
2026-03-18 04:09:44.289852 | orchestrator |
2026-03-18 04:09:44.289924 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-18 04:09:44.289938 | orchestrator | Wednesday 18 March 2026 04:09:28 +0000 (0:00:00.928) 0:00:06.354 *******
2026-03-18 04:09:44.289951 | orchestrator | ok: [testbed-manager]
2026-03-18 04:09:44.289965 | orchestrator |
2026-03-18 04:09:44.289978 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-18 04:09:44.289991 | orchestrator | Wednesday 18 March 2026 04:09:29 +0000 (0:00:00.743) 0:00:07.282 *******
2026-03-18 04:09:44.290003 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-03-18 04:09:44.290072 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-03-18 04:09:44.290101 | orchestrator | ok: [testbed-manager]
2026-03-18 04:09:44.290113 | orchestrator |
2026-03-18 04:09:44.290125 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-18 04:09:44.290137 | orchestrator | Wednesday 18 March 2026 04:09:30 +0000 (0:00:00.743) 0:00:08.026 *******
2026-03-18 04:09:44.290149 | orchestrator | changed: [testbed-manager]
2026-03-18 04:09:44.290162 | orchestrator |
2026-03-18 04:09:44.290175 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-18 04:09:44.290187 | orchestrator | Wednesday 18 March 2026 04:09:40 +0000 (0:00:10.264) 0:00:18.290 *******
2026-03-18 04:09:44.290199 | orchestrator | changed: [testbed-manager]
2026-03-18 04:09:44.290212 | orchestrator |
2026-03-18 04:09:44.290222 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-18 04:09:44.290233 | orchestrator | Wednesday 18 March 2026 04:09:42 +0000 (0:00:01.329) 0:00:19.620 *******
2026-03-18 04:09:44.290243 | orchestrator | changed: [testbed-manager]
2026-03-18 04:09:44.290254 | orchestrator |
2026-03-18 04:09:44.290265 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-18 04:09:44.290275 | orchestrator | Wednesday 18 March 2026 04:09:42 +0000 (0:00:00.627) 0:00:20.247 *******
2026-03-18 04:09:44.290286 | orchestrator | ok: [testbed-manager]
2026-03-18 04:09:44.290296 | orchestrator |
2026-03-18 04:09:44.290316 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 04:09:44.290328 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-18 04:09:44.290339 | orchestrator |
2026-03-18 04:09:44.290350 | orchestrator |
2026-03-18 04:09:44.290360 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 04:09:44.290371 | orchestrator | Wednesday 18 March 2026 04:09:43 +0000 (0:00:01.136) 0:00:21.384 *******
2026-03-18 04:09:44.290382 | orchestrator | ===============================================================================
2026-03-18 04:09:44.290392 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 10.26s
2026-03-18 04:09:44.290403 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.43s
2026-03-18 04:09:44.290413 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.33s
2026-03-18 04:09:44.290424 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.29s
2026-03-18 04:09:44.290434 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.14s
2026-03-18 04:09:44.290445 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 1.10s
2026-03-18 04:09:44.290473 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.93s
2026-03-18 04:09:44.290484 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.84s
2026-03-18 04:09:44.290495 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.74s
2026-03-18 04:09:44.290505 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.63s
2026-03-18 04:09:44.618314 | orchestrator | + osism apply -a upgrade common
2026-03-18 04:09:46.757144 | orchestrator | 2026-03-18 04:09:46 | INFO  | Task 5efb1b6b-69b5-4cfe-b5c1-fc978f3c9667 (common) was prepared for execution.
2026-03-18 04:09:46.757272 | orchestrator | 2026-03-18 04:09:46 | INFO  | It takes a moment until task 5efb1b6b-69b5-4cfe-b5c1-fc978f3c9667 (common) has been started and output is visible here.
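The recurring callback warning "Expecting value: line 2 column 1 (char 1)" is Python's standard JSON decode error; it occurs when the decoder skips a leading newline and then finds no JSON value. A minimal reproduction, assuming (not confirmed from the source) that the callback plugin attempted to json-decode output beginning with a blank line:

```python
import json

# json.loads skips leading whitespace; with input that is only a newline,
# it fails at line 2, column 1 (character offset 1) -- exactly the message
# that the callback plugin surfaces in the log above.
try:
    json.loads("\n")
except json.JSONDecodeError as exc:
    print(exc)
```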
2026-03-18 04:10:03.340280 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-18 04:10:03.340392 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-18 04:10:03.340420 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-18 04:10:03.340431 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-18 04:10:03.340453 | orchestrator | 2026-03-18 04:10:03.340465 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-18 04:10:03.340476 | orchestrator | 2026-03-18 04:10:03.340488 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-18 04:10:03.340499 | orchestrator | Wednesday 18 March 2026 04:09:53 +0000 (0:00:02.064) 0:00:02.064 ******* 2026-03-18 04:10:03.340510 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 04:10:03.340522 | orchestrator | 2026-03-18 04:10:03.340533 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-18 04:10:03.340544 | orchestrator | Wednesday 18 March 2026 04:09:55 +0000 (0:00:02.284) 0:00:04.348 ******* 2026-03-18 04:10:03.340555 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-18 04:10:03.340566 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-18 04:10:03.340576 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-18 04:10:03.340587 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-18 04:10:03.340634 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-18 04:10:03.340664 | orchestrator | ok: [testbed-manager] => 
(item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-18 04:10:03.340684 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-18 04:10:03.340702 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-18 04:10:03.340720 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-18 04:10:03.340738 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-18 04:10:03.340755 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-18 04:10:03.340773 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-18 04:10:03.340791 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-18 04:10:03.340811 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-18 04:10:03.340829 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-18 04:10:03.340849 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-18 04:10:03.340867 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-18 04:10:03.340885 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-18 04:10:03.340896 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-18 04:10:03.340938 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-18 04:10:03.340950 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-18 04:10:03.340960 | orchestrator | 2026-03-18 04:10:03.340971 | orchestrator | TASK [common : include_tasks] 
************************************************** 2026-03-18 04:10:03.340982 | orchestrator | Wednesday 18 March 2026 04:09:58 +0000 (0:00:02.912) 0:00:07.261 ******* 2026-03-18 04:10:03.340993 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 04:10:03.341005 | orchestrator | 2026-03-18 04:10:03.341017 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-18 04:10:03.341028 | orchestrator | Wednesday 18 March 2026 04:10:00 +0000 (0:00:02.289) 0:00:09.551 ******* 2026-03-18 04:10:03.341044 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:03.341101 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:03.341115 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:03.341139 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:03.341151 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:03.341162 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:03.341351 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:03.341366 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:03.341387 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 
04:10:05.186146 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:05.186247 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:05.186257 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:05.186278 | orchestrator 
| ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:05.186285 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:05.186299 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:05.186307 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:05.186326 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:05.186338 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:05.186345 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:05.186351 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:10:05.186357 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:10:05.186364 | orchestrator |
2026-03-18 04:10:05.186371 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-18 04:10:05.186378 | orchestrator | Wednesday 18 March 2026 04:10:04 +0000 (0:00:03.596) 0:00:13.147 *******
2026-03-18 04:10:05.186390 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:10:05.186398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18
04:10:05.186405 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:05.186425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:06.101634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:06.101796 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:06.101828 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:10:06.101854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:06.101876 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:10:06.101895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:06.102159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:06.102196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:06.102246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:06.102266 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:10:06.102316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:06.102337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:06.102357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:06.102375 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:10:06.102395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:06.102424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:06.102444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:06.102492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:06.102512 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:10:06.102546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:10:08.244728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:10:08.244811 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:10:08.244826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:10:08.244837 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:10:08.244847 | orchestrator |
2026-03-18 04:10:08.244858 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-03-18 04:10:08.244869 | orchestrator | Wednesday 18 March 2026 04:10:06 +0000 (0:00:01.805) 0:00:14.953 *******
2026-03-18 04:10:08.244880 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:08.244944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:08.244957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:08.244986 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:08.244997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:08.245027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:08.245038 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:10:08.245048 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:08.245058 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:10:08.245068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:08.245079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:08.245089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:08.245105 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:10:08.245115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:08.245125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:08.245136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:08.245151 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:10:15.872425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:15.872558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:15.872589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:15.872621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:15.872671 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:10:15.872693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:15.872711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:15.872729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:15.872745 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:10:15.872788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:10:15.872807 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:10:15.872824 | orchestrator |
2026-03-18 04:10:15.872841 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-03-18 04:10:15.872860 | orchestrator | Wednesday 18 March 2026 04:10:08 +0000 (0:00:02.156) 0:00:17.110 *******
2026-03-18 04:10:15.872876 | orchestrator | skipping: [testbed-manager]
2026-03-18 04:10:15.872892 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:10:15.872909 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:10:15.872927 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:10:15.872980 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:10:15.872997 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:10:15.873013 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:10:15.873029 | orchestrator |
2026-03-18 04:10:15.873046 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-03-18 04:10:15.873062 | orchestrator | Wednesday 18 March 2026 04:10:09 +0000 (0:00:00.852) 0:00:17.963 *******
2026-03-18 04:10:15.873078 | orchestrator | skipping: [testbed-manager]
2026-03-18 04:10:15.873094 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:10:15.873110 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:10:15.873128 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:10:15.873144 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:10:15.873160 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:10:15.873176 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:10:15.873191 | orchestrator |
2026-03-18 04:10:15.873226 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-03-18 04:10:15.873243 | orchestrator | Wednesday 18 March 2026 04:10:09 +0000 (0:00:00.850) 0:00:18.813 *******
2026-03-18 04:10:15.873259 | orchestrator | skipping: [testbed-manager]
2026-03-18 04:10:15.873275 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:10:15.873291 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:10:15.873307 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:10:15.873324 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:10:15.873340 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:10:15.873356 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:10:15.873372 | orchestrator |
2026-03-18 04:10:15.873390 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-03-18 04:10:15.873407 | orchestrator | Wednesday 18 March 2026 04:10:10 +0000 (0:00:00.766) 0:00:19.580 *******
2026-03-18 04:10:15.873423 | orchestrator | changed: [testbed-manager]
2026-03-18 04:10:15.873440 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:10:15.873457 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:10:15.873483 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:10:15.873500 | orchestrator | changed: [testbed-node-3]
2026-03-18 04:10:15.873517 | orchestrator | changed: [testbed-node-4]
2026-03-18 04:10:15.873532 | orchestrator | changed: [testbed-node-5]
2026-03-18 04:10:15.873548 | orchestrator |
2026-03-18 04:10:15.873564 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-03-18 04:10:15.873581 | orchestrator | Wednesday 18 March 2026 04:10:12 +0000 (0:00:01.975) 0:00:21.555 *******
2026-03-18 04:10:15.873601 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:15.873620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:15.873638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:15.873672 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:16.885908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:16.886121 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:16.886157 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:16.886170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:16.886182 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:16.886194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:16.886224 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:16.886245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:16.886257 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:16.886272 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:16.886284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:16.886296 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:16.886307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:16.886319 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:16.886338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:16.886365 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:29.908763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:29.908891 | orchestrator | 2026-03-18 04:10:29.908923 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-18 04:10:29.908944 | orchestrator | Wednesday 18 March 2026 04:10:16 +0000 (0:00:04.194) 0:00:25.749 ******* 2026-03-18 04:10:29.909031 | orchestrator | [WARNING]: Skipped 2026-03-18 04:10:29.909057 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 
2026-03-18 04:10:29.909078 | orchestrator | to this access issue: 2026-03-18 04:10:29.909097 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-18 04:10:29.909117 | orchestrator | directory 2026-03-18 04:10:29.909136 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-18 04:10:29.909156 | orchestrator | 2026-03-18 04:10:29.909176 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-18 04:10:29.909195 | orchestrator | Wednesday 18 March 2026 04:10:18 +0000 (0:00:01.363) 0:00:27.113 ******* 2026-03-18 04:10:29.909214 | orchestrator | [WARNING]: Skipped 2026-03-18 04:10:29.909232 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-18 04:10:29.909250 | orchestrator | to this access issue: 2026-03-18 04:10:29.909294 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-18 04:10:29.909309 | orchestrator | directory 2026-03-18 04:10:29.909322 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-18 04:10:29.909334 | orchestrator | 2026-03-18 04:10:29.909348 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-18 04:10:29.909360 | orchestrator | Wednesday 18 March 2026 04:10:19 +0000 (0:00:00.887) 0:00:28.001 ******* 2026-03-18 04:10:29.909373 | orchestrator | [WARNING]: Skipped 2026-03-18 04:10:29.909385 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-18 04:10:29.909398 | orchestrator | to this access issue: 2026-03-18 04:10:29.909410 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-18 04:10:29.909423 | orchestrator | directory 2026-03-18 04:10:29.909435 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-18 04:10:29.909447 | orchestrator | 2026-03-18 04:10:29.909460 | orchestrator | 
TASK [common : Find custom fluentd output config files] ************************ 2026-03-18 04:10:29.909472 | orchestrator | Wednesday 18 March 2026 04:10:20 +0000 (0:00:00.878) 0:00:28.879 ******* 2026-03-18 04:10:29.909485 | orchestrator | [WARNING]: Skipped 2026-03-18 04:10:29.909497 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-18 04:10:29.909510 | orchestrator | to this access issue: 2026-03-18 04:10:29.909522 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-18 04:10:29.909534 | orchestrator | directory 2026-03-18 04:10:29.909546 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-18 04:10:29.909585 | orchestrator | 2026-03-18 04:10:29.909597 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-18 04:10:29.909610 | orchestrator | Wednesday 18 March 2026 04:10:20 +0000 (0:00:00.960) 0:00:29.839 ******* 2026-03-18 04:10:29.909627 | orchestrator | changed: [testbed-manager] 2026-03-18 04:10:29.909645 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:10:29.909660 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:10:29.909675 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:10:29.909692 | orchestrator | changed: [testbed-node-3] 2026-03-18 04:10:29.909710 | orchestrator | changed: [testbed-node-4] 2026-03-18 04:10:29.909729 | orchestrator | changed: [testbed-node-5] 2026-03-18 04:10:29.909747 | orchestrator | 2026-03-18 04:10:29.909767 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-18 04:10:29.909779 | orchestrator | Wednesday 18 March 2026 04:10:23 +0000 (0:00:03.020) 0:00:32.860 ******* 2026-03-18 04:10:29.909790 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-18 04:10:29.909802 | orchestrator | ok: [testbed-node-0] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-18 04:10:29.909813 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-18 04:10:29.909823 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-18 04:10:29.909834 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-18 04:10:29.909844 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-18 04:10:29.909855 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-18 04:10:29.909866 | orchestrator | 2026-03-18 04:10:29.909876 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-18 04:10:29.909887 | orchestrator | Wednesday 18 March 2026 04:10:26 +0000 (0:00:02.210) 0:00:35.070 ******* 2026-03-18 04:10:29.909898 | orchestrator | ok: [testbed-manager] 2026-03-18 04:10:29.909909 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:10:29.909920 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:10:29.909930 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:10:29.909941 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:10:29.909951 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:10:29.909985 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:10:29.909996 | orchestrator | 2026-03-18 04:10:29.910076 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-18 04:10:29.910092 | orchestrator | Wednesday 18 March 2026 04:10:27 +0000 (0:00:01.787) 0:00:36.858 ******* 2026-03-18 04:10:29.910107 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:29.910130 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:29.910144 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:29.910167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:29.910180 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:29.910194 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:29.910205 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:29.910226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:34.220877 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:34.220962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:34.221019 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:34.221025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:34.221030 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:34.221036 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:34.221040 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:34.221055 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:34.221059 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:34.221068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:34.221072 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:34.221076 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:34.221080 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:34.221084 | orchestrator | 2026-03-18 04:10:34.221089 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-18 04:10:34.221094 | orchestrator | Wednesday 18 March 2026 04:10:29 +0000 (0:00:01.912) 0:00:38.771 ******* 2026-03-18 
04:10:34.221098 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-18 04:10:34.221102 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-18 04:10:34.221106 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-18 04:10:34.221109 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-18 04:10:34.221113 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-18 04:10:34.221117 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-18 04:10:34.221120 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-18 04:10:34.221124 | orchestrator | 2026-03-18 04:10:34.221128 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-18 04:10:34.221132 | orchestrator | Wednesday 18 March 2026 04:10:31 +0000 (0:00:02.101) 0:00:40.872 ******* 2026-03-18 04:10:34.221136 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-18 04:10:34.221140 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-18 04:10:34.221143 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-18 04:10:34.221147 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-18 04:10:34.221157 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-18 04:10:37.236180 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-18 04:10:37.236252 | orchestrator | ok: [testbed-node-5] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-18 04:10:37.236259 | orchestrator | 2026-03-18 04:10:37.236264 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-03-18 04:10:37.236269 | orchestrator | Wednesday 18 March 2026 04:10:34 +0000 (0:00:02.205) 0:00:43.077 ******* 2026-03-18 04:10:37.236287 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:37.236297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:37.236301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:37.236305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:37.236309 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:37.236314 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:37.236342 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:37.236347 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:10:37.236363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:37.236368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:37.236375 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:37.236379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:37.236384 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:37.236395 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:39.332495 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:39.332584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:39.332595 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:39.332602 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:39.332608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:39.332614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:39.332619 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:10:39.332642 | orchestrator | 2026-03-18 04:10:39.332649 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-03-18 04:10:39.332655 | orchestrator | Wednesday 18 March 2026 04:10:37 +0000 (0:00:03.524) 0:00:46.602 ******* 2026-03-18 04:10:39.332662 | orchestrator | changed: [testbed-manager] => { 2026-03-18 04:10:39.332669 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:10:39.332675 | orchestrator | } 2026-03-18 04:10:39.332681 | orchestrator | changed: [testbed-node-0] => { 2026-03-18 04:10:39.332686 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:10:39.332692 | orchestrator | } 2026-03-18 04:10:39.332697 | orchestrator | changed: [testbed-node-1] => { 2026-03-18 04:10:39.332702 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:10:39.332708 | orchestrator | } 2026-03-18 04:10:39.332713 | orchestrator | changed: [testbed-node-2] => { 2026-03-18 04:10:39.332718 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:10:39.332724 | orchestrator | } 2026-03-18 04:10:39.332729 | orchestrator | changed: [testbed-node-3] => { 2026-03-18 04:10:39.332734 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:10:39.332740 | orchestrator | } 2026-03-18 04:10:39.332745 | orchestrator | changed: [testbed-node-4] => { 2026-03-18 04:10:39.332751 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:10:39.332756 | orchestrator | } 2026-03-18 04:10:39.332762 | orchestrator | changed: [testbed-node-5] => { 2026-03-18 04:10:39.332767 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:10:39.332773 | orchestrator | } 2026-03-18 04:10:39.332778 | 
orchestrator | 2026-03-18 04:10:39.332795 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-18 04:10:39.332801 | orchestrator | Wednesday 18 March 2026 04:10:38 +0000 (0:00:01.121) 0:00:47.723 ******* 2026-03-18 04:10:39.332808 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:39.332817 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:39.332823 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:39.332829 | 
orchestrator | skipping: [testbed-manager] 2026-03-18 04:10:39.332836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:39.332846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:39.332853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:39.332859 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:10:39.332865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:39.332875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:41.998315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:41.998414 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:10:41.998433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:41.998447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:41.998483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:41.998496 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:10:41.998507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:41.998519 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:41.998530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:41.998557 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-18 04:10:41.998570 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-18 04:10:41.998593 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:10:41.998627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:41.998640 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:41.998652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:41.998672 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:10:41.998683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:10:41.998695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:41.998706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:10:41.998717 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:10:41.998728 | orchestrator | 2026-03-18 04:10:41.998739 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-18 04:10:41.998750 | orchestrator | Wednesday 18 March 2026 04:10:41 +0000 (0:00:02.254) 0:00:49.978 ******* 2026-03-18 04:10:41.998761 | orchestrator | 2026-03-18 04:10:41.998772 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-18 04:10:41.998782 | orchestrator | Wednesday 18 March 2026 04:10:41 +0000 (0:00:00.083) 0:00:50.062 ******* 2026-03-18 04:10:41.998793 | orchestrator | 2026-03-18 04:10:41.998803 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-18 04:10:41.998814 | orchestrator | Wednesday 18 March 2026 04:10:41 +0000 (0:00:00.105) 0:00:50.167 ******* 2026-03-18 04:10:41.998824 | orchestrator | 2026-03-18 04:10:41.998835 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-18 04:10:41.998846 | 
orchestrator | Wednesday 18 March 2026 04:10:41 +0000 (0:00:00.077) 0:00:50.245 ******* 2026-03-18 04:10:41.998859 | orchestrator | 2026-03-18 04:10:41.998871 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-18 04:10:41.998882 | orchestrator | Wednesday 18 March 2026 04:10:41 +0000 (0:00:00.073) 0:00:50.319 ******* 2026-03-18 04:10:41.998894 | orchestrator | 2026-03-18 04:10:41.998906 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-18 04:10:41.998926 | orchestrator | Wednesday 18 March 2026 04:10:41 +0000 (0:00:00.345) 0:00:50.664 ******* 2026-03-18 04:10:44.160376 | orchestrator | 2026-03-18 04:10:44.160459 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-18 04:10:44.160497 | orchestrator | Wednesday 18 March 2026 04:10:41 +0000 (0:00:00.074) 0:00:50.739 ******* 2026-03-18 04:10:44.160503 | orchestrator | 2026-03-18 04:10:44.160518 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-18 04:10:44.160537 | orchestrator | Wednesday 18 March 2026 04:10:41 +0000 (0:00:00.107) 0:00:50.847 ******* 2026-03-18 04:10:44.160542 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin 2026-03-18 04:10:44.160547 | orchestrator | (): '9965e008-26bd-cb4b-0e00-00000000000f' 2026-03-18 04:10:44.160560 | orchestrator | fatal: [testbed-manager]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_7vyft6zs/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_7vyft6zs/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_7vyft6zs/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-18 04:10:44.160581 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload__ti3ih2l/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload__ti3ih2l/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload__ti3ih2l/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-18 04:10:44.160590 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_x5ch0let/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_x5ch0let/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_x5ch0let/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-18 04:10:44.160601 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_71tsbgcy/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_71tsbgcy/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_71tsbgcy/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-18 04:10:45.925433 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_i3eqzrgt/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_i3eqzrgt/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_i3eqzrgt/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-18 04:10:45.925596 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_i8zfub67/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_i8zfub67/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_i8zfub67/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-18 04:10:45.925658 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_px9fxpm4/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_px9fxpm4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_px9fxpm4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-18 04:10:45.925737 | orchestrator |
2026-03-18 04:10:45.925754 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 04:10:45.925767 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-18 04:10:45.925780 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-18 04:10:45.925791 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-18 04:10:45.925802 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-18 04:10:45.925813 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-18 04:10:45.925824 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-18 04:10:45.925835 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-18 04:10:45.925846 | orchestrator |
2026-03-18 04:10:45.925857 | orchestrator |
2026-03-18 04:10:45.925878 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 04:10:46.430611 | orchestrator | 2026-03-18 04:10:46 | INFO  | Task 17f89495-112c-499e-99b6-233a0625a763 (common) was prepared for execution.
2026-03-18 04:10:46.430732 | orchestrator | 2026-03-18 04:10:46 | INFO  | It takes a moment until task 17f89495-112c-499e-99b6-233a0625a763 (common) has been started and output is visible here.
2026-03-18 04:11:05.451563 | orchestrator | Wednesday 18 March 2026 04:10:45 +0000 (0:00:03.944) 0:00:54.792 *******
2026-03-18 04:11:05.451681 | orchestrator | ===============================================================================
2026-03-18 04:11:05.451697 | orchestrator | common : Copying over config.json files for services -------------------- 4.19s
2026-03-18 04:11:05.451709 | orchestrator | common : Restart fluentd container -------------------------------------- 3.94s
2026-03-18 04:11:05.451720 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.60s
2026-03-18 04:11:05.451731 | orchestrator | service-check-containers : common | Check containers -------------------- 3.52s
2026-03-18 04:11:05.451742 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.02s
2026-03-18 04:11:05.451753 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.91s
2026-03-18 04:11:05.451764 | orchestrator | common : include_tasks -------------------------------------------------- 2.29s
2026-03-18 04:11:05.451775 | orchestrator | common : include_tasks -------------------------------------------------- 2.28s
2026-03-18 04:11:05.451786 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.25s
2026-03-18 04:11:05.451797 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.21s
2026-03-18 04:11:05.451807 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.21s
2026-03-18 04:11:05.451818 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.16s
2026-03-18 04:11:05.451829 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.10s
2026-03-18 04:11:05.451840 | orchestrator | common : Copying over kolla.target -------------------------------------- 1.98s
2026-03-18 04:11:05.451876 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.91s
2026-03-18 04:11:05.451888 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.81s
2026-03-18 04:11:05.451899 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.79s
2026-03-18 04:11:05.451910 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.36s
2026-03-18 04:11:05.451921 | orchestrator | service-check-containers : common | Notify handlers to restart containers --- 1.12s
2026-03-18 04:11:05.451932 | orchestrator | common : Find custom fluentd output config files ------------------------ 0.96s
2026-03-18 04:11:05.451943 | orchestrator |
2026-03-18 04:11:05.451955 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-18 04:11:05.451966 | orchestrator |
2026-03-18 04:11:05.451978 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-18 04:11:05.451989 | orchestrator | Wednesday 18 March 2026 04:10:53 +0000 (0:00:02.155) 0:00:02.155 *******
2026-03-18 04:11:05.452000 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 04:11:05.452012 | orchestrator |
2026-03-18 04:11:05.452023 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-18 04:11:05.452103 | orchestrator | Wednesday 18 March 2026 04:10:56 +0000 (0:00:03.393) 0:00:05.548 *******
2026-03-18 04:11:05.452125 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-18 04:11:05.452165 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-18 04:11:05.452185 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-18 04:11:05.452203 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-18 04:11:05.452223 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-18 04:11:05.452241 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-18 04:11:05.452261 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-18 04:11:05.452279 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-18 04:11:05.452298 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-18 04:11:05.452318 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-18 04:11:05.452339 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-18 04:11:05.452361 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-18 04:11:05.452381 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-18 04:11:05.452401 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-18 04:11:05.452421 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-18 04:11:05.452442 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-18 04:11:05.452461 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-18 04:11:05.452480 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-18 04:11:05.452497 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-18 04:11:05.452515 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-18 04:11:05.452562 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-18 04:11:05.452584 | orchestrator |
2026-03-18 04:11:05.452604 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-18 04:11:05.452633 | orchestrator | Wednesday 18 March 2026 04:10:59 +0000 (0:00:03.274) 0:00:08.823 *******
2026-03-18 04:11:05.452645 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 04:11:05.452657 | orchestrator |
2026-03-18 04:11:05.452668 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-18 04:11:05.452679 | orchestrator | Wednesday 18 March 2026 04:11:02 +0000 (0:00:02.965) 0:00:11.789 *******
2026-03-18 04:11:05.452693 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:05.452708 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:05.452720 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:05.452739 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:05.452751 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:05.452762 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:05.452782 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:07.670368 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:07.670474 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:07.670490 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:07.670521 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:07.670532 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:07.670544 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:07.670597 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:07.670611 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:07.670624 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:07.670635 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:07.670646 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:07.670663 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:07.670674 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:07.670685 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:07.670704 | orchestrator |
2026-03-18 04:11:07.670717 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-18 04:11:07.670729 | orchestrator | Wednesday 18 March 2026 04:11:07 +0000 (0:00:04.414) 0:00:16.204 *******
2026-03-18 04:11:07.670742 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:07.670765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:09.912108 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:09.912235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:09.912280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:09.912303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:09.912324 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:11:09.912344 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:09.912390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:09.912409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:09.912462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:09.912481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:09.912499 | orchestrator | skipping: [testbed-manager]
2026-03-18 04:11:09.912518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:09.912538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:09.912556 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:11:09.912576 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:11:09.912597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/',
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:09.912629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:09.912649 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:11:09.912669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:11:09.912746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:11:11.171758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:11.171859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:11.171889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:11.171903 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:11:11.171915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:11.171943 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:11:11.171953 | orchestrator | 2026-03-18 04:11:11.171963 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-18 04:11:11.171972 | orchestrator | Wednesday 18 March 2026 04:11:09 +0000 (0:00:02.782) 0:00:18.987 ******* 2026-03-18 04:11:11.171982 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:11:11.171992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:11:11.172017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:11.172027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:11:11.172036 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:11.172096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:11.172115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:11.172125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:11:11.172134 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:11:11.172143 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:11.172152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:11.172169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:25.184473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:25.184593 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:11:25.184611 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:11:25.184622 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:11:25.184635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:11:25.184685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:25.184699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:11:25.184711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:25.184723 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:11:25.184735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:25.184746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:25.184757 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:11:25.184786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-18 04:11:25.184799 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:25.184824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:11:25.184836 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:11:25.184847 | orchestrator | 2026-03-18 04:11:25.184859 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-03-18 04:11:25.184871 | orchestrator | Wednesday 18 March 2026 04:11:13 +0000 (0:00:03.317) 0:00:22.304 ******* 2026-03-18 04:11:25.184882 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:11:25.184893 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:11:25.184904 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:11:25.184914 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:11:25.184925 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:11:25.184936 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:11:25.184947 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:11:25.184958 | orchestrator | 2026-03-18 04:11:25.184969 
| orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-18 04:11:25.184980 | orchestrator | Wednesday 18 March 2026 04:11:15 +0000 (0:00:02.299) 0:00:24.604 ******* 2026-03-18 04:11:25.184992 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:11:25.185005 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:11:25.185018 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:11:25.185030 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:11:25.185042 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:11:25.185054 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:11:25.185067 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:11:25.185112 | orchestrator | 2026-03-18 04:11:25.185123 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-18 04:11:25.185134 | orchestrator | Wednesday 18 March 2026 04:11:17 +0000 (0:00:01.956) 0:00:26.560 ******* 2026-03-18 04:11:25.185145 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:11:25.185156 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:11:25.185167 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:11:25.185178 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:11:25.185188 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:11:25.185199 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:11:25.185209 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:11:25.185220 | orchestrator | 2026-03-18 04:11:25.185231 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-03-18 04:11:25.185242 | orchestrator | Wednesday 18 March 2026 04:11:19 +0000 (0:00:01.936) 0:00:28.497 ******* 2026-03-18 04:11:25.185252 | orchestrator | ok: [testbed-manager] 2026-03-18 04:11:25.185264 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:11:25.185276 | orchestrator | ok: [testbed-node-1] 2026-03-18 
04:11:25.185295 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:11:25.185313 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:11:25.185332 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:11:25.185351 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:11:25.185370 | orchestrator | 2026-03-18 04:11:25.185388 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-18 04:11:25.185403 | orchestrator | Wednesday 18 March 2026 04:11:22 +0000 (0:00:03.063) 0:00:31.560 ******* 2026-03-18 04:11:25.185415 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:11:25.185446 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:11:27.123592 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:11:27.123741 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:11:27.123762 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:11:27.123774 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:11:27.123786 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:11:27.123798 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:11:27.123830 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-18 04:11:27.123863 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:27.123881 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:27.123893 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:27.123905 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:27.123917 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:27.123936 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:27.123948 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:27.123967 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:46.402593 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:46.402732 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:46.402757 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:46.402777 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:46.402796 | orchestrator |
2026-03-18 04:11:46.402815 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-03-18 04:11:46.402834 | orchestrator | Wednesday 18 March 2026 04:11:27 +0000 (0:00:04.643) 0:00:36.204 *******
2026-03-18 04:11:46.402852 | orchestrator | [WARNING]: Skipped
2026-03-18 04:11:46.402870 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-03-18 04:11:46.402889 | orchestrator | to this access issue:
2026-03-18 04:11:46.402907 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-03-18 04:11:46.402956 | orchestrator | directory
2026-03-18 04:11:46.402975 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-18 04:11:46.402994 | orchestrator |
2026-03-18 04:11:46.403010 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-03-18 04:11:46.403027 | orchestrator | Wednesday 18 March 2026 04:11:29 +0000 (0:00:02.457) 0:00:38.661 *******
2026-03-18 04:11:46.403044 | orchestrator | [WARNING]: Skipped
2026-03-18 04:11:46.403060 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-03-18 04:11:46.403076 | orchestrator | to this access issue:
2026-03-18 04:11:46.403093 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-03-18 04:11:46.403143 | orchestrator | directory
2026-03-18 04:11:46.403161 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-18 04:11:46.403176 | orchestrator |
2026-03-18 04:11:46.403194 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-03-18 04:11:46.403211 | orchestrator | Wednesday 18 March 2026 04:11:31 +0000 (0:00:01.955) 0:00:40.617 *******
2026-03-18 04:11:46.403228 | orchestrator | [WARNING]: Skipped
2026-03-18 04:11:46.403246 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-03-18 04:11:46.403265 | orchestrator | to this access issue:
2026-03-18 04:11:46.403304 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-03-18 04:11:46.403322 | orchestrator | directory
2026-03-18 04:11:46.403343 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-18 04:11:46.403363 | orchestrator |
2026-03-18 04:11:46.403384 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-03-18 04:11:46.403403 | orchestrator | Wednesday 18 March 2026 04:11:33 +0000 (0:00:01.853) 0:00:42.471 *******
2026-03-18 04:11:46.403421 | orchestrator | [WARNING]: Skipped
2026-03-18 04:11:46.403439 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-03-18 04:11:46.403457 | orchestrator | to this access issue:
2026-03-18 04:11:46.403474 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-03-18 04:11:46.403490 | orchestrator | directory
2026-03-18 04:11:46.403506 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-18 04:11:46.403522 | orchestrator |
2026-03-18 04:11:46.403538 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-03-18 04:11:46.403555 | orchestrator | Wednesday 18 March 2026 04:11:35 +0000 (0:00:01.948) 0:00:44.419 *******
2026-03-18 04:11:46.403572 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:11:46.403588 | orchestrator | ok: [testbed-manager]
2026-03-18 04:11:46.403605 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:11:46.403621 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:11:46.403638 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:11:46.403655 | orchestrator | ok: [testbed-node-4]
2026-03-18 04:11:46.403671 | orchestrator | ok: [testbed-node-5]
2026-03-18 04:11:46.403689 | orchestrator |
2026-03-18 04:11:46.403731 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-03-18 04:11:46.403750 | orchestrator | Wednesday 18 March 2026 04:11:39 +0000 (0:00:03.847) 0:00:48.267 *******
2026-03-18 04:11:46.403767 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-18 04:11:46.403787 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-18 04:11:46.403804 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-18 04:11:46.403822 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-18 04:11:46.403839 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-18 04:11:46.403856 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-18 04:11:46.403903 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-18 04:11:46.403951 | orchestrator |
2026-03-18 04:11:46.403971 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-03-18 04:11:46.403988 | orchestrator | Wednesday 18 March 2026 04:11:42 +0000 (0:00:03.444) 0:00:51.711 *******
2026-03-18 04:11:46.404006 | orchestrator | ok: [testbed-manager]
2026-03-18 04:11:46.404024 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:11:46.404041 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:11:46.404059 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:11:46.404076 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:11:46.404093 | orchestrator | ok: [testbed-node-4]
2026-03-18 04:11:46.404162 | orchestrator | ok: [testbed-node-5]
2026-03-18 04:11:46.404181 | orchestrator |
2026-03-18 04:11:46.404199 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-03-18 04:11:46.404217 | orchestrator | Wednesday 18 March 2026 04:11:45 +0000 (0:00:02.879) 0:00:54.591 *******
2026-03-18 04:11:46.404237 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:46.404261 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:46.404282 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:46.404302 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:46.404338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:47.214409 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:47.214508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:47.214516 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:47.214521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:47.214525 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:47.214529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:47.214535 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:47.214560 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:47.214567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:47.214571 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:47.214576 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:47.214580 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:47.214584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:47.214588 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:47.214592 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:47.214603 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:57.850950 | orchestrator |
2026-03-18 04:11:57.851066 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-03-18 04:11:57.851083 | orchestrator | Wednesday 18 March 2026 04:11:48 +0000 (0:00:02.888) 0:00:57.479 *******
2026-03-18 04:11:57.851095 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-18 04:11:57.851107 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-18 04:11:57.851179 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-18 04:11:57.851194 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-18 04:11:57.851205 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-18 04:11:57.851216 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-18 04:11:57.851227 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-18 04:11:57.851238 | orchestrator |
2026-03-18 04:11:57.851249 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-03-18 04:11:57.851260 | orchestrator | Wednesday 18 March 2026 04:11:51 +0000 (0:00:02.957) 0:01:00.437 *******
2026-03-18 04:11:57.851271 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-18 04:11:57.851281 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-18 04:11:57.851292 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-18 04:11:57.851303 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-18 04:11:57.851314 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-18 04:11:57.851324 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-18 04:11:57.851335 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-18 04:11:57.851345 | orchestrator |
2026-03-18 04:11:57.851356 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-03-18 04:11:57.851367 | orchestrator | Wednesday 18 March 2026 04:11:55 +0000 (0:00:04.109) 0:01:04.547 *******
2026-03-18 04:11:57.851382 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:57.851397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:57.851432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:57.851444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:57.851486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:57.851506 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:57.851519 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:57.851534 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:11:57.851547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:57.851567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:57.851581 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:11:57.851606 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:02.492947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:02.493111 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:02.493178 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:02.493201 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:02.493255 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:02.493335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:02.493358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:02.493379 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:02.493449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:02.493477 | orchestrator |
2026-03-18 04:12:02.493504 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-03-18 04:12:02.493530 | orchestrator | Wednesday 18 March 2026 04:11:59 +0000 (0:00:04.380) 0:01:08.927 *******
2026-03-18 04:12:02.493556 | orchestrator | changed: [testbed-manager] => {
2026-03-18 04:12:02.493583 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:12:02.493609 | orchestrator | }
2026-03-18 04:12:02.493633 | orchestrator | changed: [testbed-node-0] => {
2026-03-18 04:12:02.493657 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:12:02.493678 | orchestrator | }
2026-03-18 04:12:02.493699 | orchestrator | changed: [testbed-node-1] => {
2026-03-18 04:12:02.493721 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:12:02.493742 | orchestrator | }
2026-03-18 04:12:02.493764 | orchestrator | changed: [testbed-node-2] => {
2026-03-18 04:12:02.493784 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:12:02.493803 | orchestrator | }
2026-03-18 04:12:02.493822 | orchestrator | changed: [testbed-node-3] => {
2026-03-18 04:12:02.493840 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:12:02.493859 | orchestrator | }
2026-03-18 04:12:02.493878 | orchestrator | changed: [testbed-node-4] => {
2026-03-18 04:12:02.493899 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:12:02.493935 | orchestrator | }
2026-03-18 04:12:02.493957 | orchestrator | changed: [testbed-node-5] => {
2026-03-18 04:12:02.493973 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:12:02.493985 | orchestrator | }
2026-03-18 04:12:02.493996 | orchestrator |
2026-03-18 04:12:02.494007 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-18 04:12:02.494079 | orchestrator | Wednesday 18 March 2026 04:12:02 +0000 (0:00:02.224) 0:01:11.151 *******
2026-03-18 04:12:02.494094 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:12:02.494107 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:02.494119 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:02.494133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:12:02.494183 | orchestrator | skipping: [testbed-manager]
2026-03-18 04:12:02.494220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:09.045671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:09.045783 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:12:09.045802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:12:09.045857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY':
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:09.045872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:09.045884 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:12:09.045896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:12:09.045908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:09.045920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:09.045931 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:12:09.045967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:12:09.045980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:09.045999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:09.046010 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:12:09.046084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:12:09.046096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:09.046108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:12:09.046119 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:12:09.046130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-18 04:12:09.046180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:13:44.174485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:13:44.174600 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:13:44.174628 | orchestrator |
2026-03-18 04:13:44.174650 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-18 04:13:44.174671 | orchestrator | Wednesday 18 March 2026 04:12:05 +0000 (0:00:03.101) 0:01:14.253 *******
2026-03-18 04:13:44.174690 | orchestrator |
2026-03-18 04:13:44.174704 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-18 04:13:44.174715 | orchestrator | Wednesday 18 March 2026 04:12:05 +0000 (0:00:00.488) 0:01:14.741 *******
2026-03-18 04:13:44.174726 | orchestrator |
2026-03-18 04:13:44.174737 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-18 04:13:44.174747 | orchestrator | Wednesday 18 March 2026 04:12:06 +0000 (0:00:00.479) 0:01:15.221 *******
2026-03-18 04:13:44.174758 | orchestrator |
2026-03-18 04:13:44.174769 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-18 04:13:44.174779 | orchestrator | Wednesday 18 March 2026 04:12:06 +0000 (0:00:00.443) 0:01:15.664 *******
2026-03-18 04:13:44.174790 | orchestrator |
2026-03-18 04:13:44.174801 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-18 04:13:44.174881 | orchestrator | Wednesday 18 March 2026 04:12:07 +0000 (0:00:00.455) 0:01:16.120 *******
2026-03-18 04:13:44.174894 | orchestrator |
2026-03-18 04:13:44.174905 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-18 04:13:44.174916 | orchestrator | Wednesday 18 March 2026 04:12:07 +0000 (0:00:00.705) 0:01:16.826 *******
2026-03-18 04:13:44.174927 | orchestrator |
2026-03-18 04:13:44.174937 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-18 04:13:44.174948 | orchestrator | Wednesday 18 March 2026 04:12:08 +0000 (0:00:00.476) 0:01:17.302 *******
2026-03-18 04:13:44.174961 | orchestrator |
2026-03-18 04:13:44.174974 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-18 04:13:44.175024 | orchestrator | Wednesday 18 March 2026 04:12:09 +0000 (0:00:00.812) 0:01:18.115 *******
2026-03-18 04:13:44.175038 | orchestrator | changed: [testbed-manager]
2026-03-18 04:13:44.175051 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:13:44.175064 | orchestrator | changed: [testbed-node-5]
2026-03-18 04:13:44.175077 | orchestrator | changed: [testbed-node-4]
2026-03-18 04:13:44.175089 | orchestrator | changed: [testbed-node-3]
2026-03-18 04:13:44.175101 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:13:44.175114 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:13:44.175127 | orchestrator |
2026-03-18 04:13:44.175139 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-18 04:13:44.175151 | orchestrator | Wednesday 18 March 2026 04:12:47 +0000 (0:00:38.456) 0:01:56.572 *******
2026-03-18 04:13:44.175164 | orchestrator | changed: [testbed-node-4]
2026-03-18 04:13:44.175176 | orchestrator | changed: [testbed-node-3]
2026-03-18 04:13:44.175188 | orchestrator | changed: [testbed-manager]
2026-03-18 04:13:44.175201 | orchestrator | changed: [testbed-node-5]
2026-03-18 04:13:44.175213 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:13:44.175225 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:13:44.175237 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:13:44.175249 | orchestrator |
2026-03-18 04:13:44.175262 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-18 04:13:44.175275 | orchestrator | Wednesday 18 March 2026 04:13:28 +0000 (0:00:40.851) 0:02:37.423 *******
2026-03-18 04:13:44.175287 | orchestrator | ok: [testbed-manager]
2026-03-18 04:13:44.175423 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:13:44.175439 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:13:44.175450 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:13:44.175461 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:13:44.175472 | orchestrator | ok: [testbed-node-4]
2026-03-18 04:13:44.175483 | orchestrator | ok: [testbed-node-5]
2026-03-18 04:13:44.175493 | orchestrator |
2026-03-18 04:13:44.175505 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-18 04:13:44.175515 | orchestrator | Wednesday 18 March 2026 04:13:31 +0000 (0:00:03.043) 0:02:40.467 *******
2026-03-18 04:13:44.175526 | orchestrator | changed: [testbed-manager]
2026-03-18 04:13:44.175537 | orchestrator | changed: [testbed-node-3]
2026-03-18 04:13:44.175548 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:13:44.175559 | orchestrator | changed: [testbed-node-4]
2026-03-18 04:13:44.175570 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:13:44.175581 | orchestrator | changed: [testbed-node-5]
2026-03-18 04:13:44.175592 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:13:44.175603 | orchestrator |
2026-03-18 04:13:44.175618 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 04:13:44.175638 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-18 04:13:44.175651 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-18 04:13:44.175678 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-18 04:13:44.175689 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-18 04:13:44.175721 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-18 04:13:44.175774 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-18 04:13:44.175786 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-18 04:13:44.175796 | orchestrator |
2026-03-18 04:13:44.175807 | orchestrator |
2026-03-18 04:13:44.175818 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 04:13:44.175829 | orchestrator | Wednesday 18 March 2026 04:13:43 +0000 (0:00:12.240) 0:02:52.707 *******
2026-03-18 04:13:44.175840 | orchestrator | ===============================================================================
2026-03-18 04:13:44.175850 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 40.85s
2026-03-18 04:13:44.175861 | orchestrator | common : Restart fluentd container ------------------------------------- 38.46s
2026-03-18 04:13:44.175871 | orchestrator | common : Restart cron container ---------------------------------------- 12.24s
2026-03-18 04:13:44.175882 | orchestrator | common : Copying over config.json files for services -------------------- 4.64s
2026-03-18 04:13:44.175892 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.41s
2026-03-18 04:13:44.175903 | orchestrator | service-check-containers : common | Check containers -------------------- 4.38s
2026-03-18 04:13:44.175913 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 4.11s
2026-03-18 04:13:44.175924 | orchestrator | common : Flush handlers ------------------------------------------------- 3.86s
2026-03-18 04:13:44.175934 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.85s
2026-03-18 04:13:44.175945 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.44s
2026-03-18 04:13:44.175956 | orchestrator | common : include_tasks -------------------------------------------------- 3.39s
2026-03-18 04:13:44.175976 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.32s
2026-03-18 04:13:44.175987 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.28s
2026-03-18 04:13:44.175998 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.10s
2026-03-18 04:13:44.176008 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.06s
2026-03-18 04:13:44.176019 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.04s
2026-03-18 04:13:44.176029 | orchestrator | common : include_tasks -------------------------------------------------- 2.97s
2026-03-18 04:13:44.176040 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.96s
2026-03-18 04:13:44.176051 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.89s
2026-03-18 04:13:44.176061 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.88s
2026-03-18 04:13:44.519286 | orchestrator | + osism apply -a upgrade loadbalancer
2026-03-18 04:13:46.885011 | orchestrator | 2026-03-18 04:13:46 | INFO  | Task 134ba149-2edf-4c90-bdcd-b1c0dcd734cc (loadbalancer) was prepared for execution.
2026-03-18 04:13:46.885099 | orchestrator | 2026-03-18 04:13:46 | INFO  | It takes a moment until task 134ba149-2edf-4c90-bdcd-b1c0dcd734cc (loadbalancer) has been started and output is visible here.
2026-03-18 04:14:23.321825 | orchestrator |
2026-03-18 04:14:23.321944 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-18 04:14:23.321963 | orchestrator |
2026-03-18 04:14:23.321975 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-18 04:14:23.321987 | orchestrator | Wednesday 18 March 2026 04:13:53 +0000 (0:00:02.278) 0:00:02.278 *******
2026-03-18 04:14:23.321999 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:14:23.322011 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:14:23.322077 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:14:23.322089 | orchestrator |
2026-03-18 04:14:23.322106 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-18 04:14:23.322126 | orchestrator | Wednesday 18 March 2026 04:13:55 +0000 (0:00:01.837) 0:00:04.118 *******
2026-03-18 04:14:23.322146 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-18 04:14:23.322163 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-18 04:14:23.322175 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-18 04:14:23.322185 | orchestrator |
2026-03-18 04:14:23.322196 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-18 04:14:23.322207 | orchestrator |
2026-03-18 04:14:23.322218 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-18 04:14:23.322228 | orchestrator | Wednesday 18 March 2026 04:13:57 +0000 (0:00:02.158) 0:00:06.276 *******
2026-03-18 04:14:23.322240 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 04:14:23.322251 | orchestrator |
2026-03-18 04:14:23.322267 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] ***
2026-03-18 04:14:23.322304 | orchestrator | Wednesday 18 March 2026 04:14:00 +0000 (0:00:02.526) 0:00:08.803 *******
2026-03-18 04:14:23.322317 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:14:23.322328 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:14:23.322338 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:14:23.322349 | orchestrator |
2026-03-18 04:14:23.322361 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] *********************
2026-03-18 04:14:23.322374 | orchestrator | Wednesday 18 March 2026 04:14:02 +0000 (0:00:02.225) 0:00:11.028 *******
2026-03-18 04:14:23.322414 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:14:23.322427 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:14:23.322440 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:14:23.322451 | orchestrator |
2026-03-18 04:14:23.322464 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-18 04:14:23.322499 | orchestrator | Wednesday 18 March 2026 04:14:05 +0000 (0:00:02.448) 0:00:13.476 *******
2026-03-18 04:14:23.322512 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:14:23.322524 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:14:23.322536 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:14:23.322549 | orchestrator |
2026-03-18 04:14:23.322562 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-18 04:14:23.322574 | orchestrator | Wednesday 18 March 2026 04:14:07 +0000 (0:00:02.106) 0:00:15.583 *******
2026-03-18 04:14:23.322587 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 04:14:23.322600 | orchestrator |
2026-03-18 04:14:23.322612 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-18 04:14:23.322624 | orchestrator | Wednesday 18 March 2026 04:14:09 +0000 (0:00:01.965) 0:00:17.548 *******
2026-03-18 04:14:23.322636 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:14:23.322648 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:14:23.322660 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:14:23.322671 | orchestrator |
2026-03-18 04:14:23.322682 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-18 04:14:23.322693 | orchestrator | Wednesday 18 March 2026 04:14:11 +0000 (0:00:01.879) 0:00:19.427 *******
2026-03-18 04:14:23.322703 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-18 04:14:23.322714 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-18 04:14:23.322725 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-18 04:14:23.322735 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-18 04:14:23.322746 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-18 04:14:23.322756 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-18 04:14:23.322767 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-18 04:14:23.322779 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-18 04:14:23.322790 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-18 04:14:23.322800 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-18 04:14:23.322811 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-18 04:14:23.322822 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-18 04:14:23.322832 | orchestrator |
2026-03-18 04:14:23.322843 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-18 04:14:23.322853 | orchestrator | Wednesday 18 March 2026 04:14:14 +0000 (0:00:03.198) 0:00:22.626 *******
2026-03-18 04:14:23.322864 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-03-18 04:14:23.322875 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-03-18 04:14:23.322886 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-03-18 04:14:23.322896 | orchestrator |
2026-03-18 04:14:23.322907 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-18 04:14:23.322935 | orchestrator | Wednesday 18 March 2026 04:14:16 +0000 (0:00:02.089) 0:00:24.716 *******
2026-03-18 04:14:23.322947 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-03-18 04:14:23.322958 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-03-18 04:14:23.322969 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-03-18 04:14:23.322979 | orchestrator |
2026-03-18 04:14:23.322990 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-18 04:14:23.323001 | orchestrator | Wednesday 18 March 2026 04:14:18 +0000 (0:00:02.269) 0:00:26.985 *******
2026-03-18 04:14:23.323021 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-03-18 04:14:23.323031 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:14:23.323042 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-03-18 04:14:23.323053 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:14:23.323064 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-03-18 04:14:23.323074 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:14:23.323085 | orchestrator |
2026-03-18 04:14:23.323095 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-03-18 04:14:23.323106 | orchestrator | Wednesday 18 March 2026 04:14:20 +0000 (0:00:01.965) 0:00:28.951 *******
2026-03-18 04:14:23.323126 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-18 04:14:23.323144 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-18 04:14:23.323156 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-18 04:14:23.323168 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-18 04:14:23.323179 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-18 04:14:23.323205 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 04:14:34.436447 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 04:14:34.436584 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 04:14:34.436603 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 04:14:34.436616 | orchestrator | 2026-03-18 04:14:34.436629 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-18 04:14:34.436641 | orchestrator | Wednesday 18 March 2026 04:14:23 +0000 (0:00:02.750) 0:00:31.701 ******* 2026-03-18 04:14:34.436652 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:14:34.436664 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:14:34.436675 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:14:34.436686 | orchestrator | 2026-03-18 04:14:34.436697 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-18 04:14:34.436708 | orchestrator | Wednesday 18 March 2026 04:14:25 +0000 (0:00:01.979) 0:00:33.681 ******* 2026-03-18 04:14:34.436719 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-03-18 04:14:34.436732 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-03-18 04:14:34.436743 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-03-18 04:14:34.436753 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-03-18 04:14:34.436764 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-03-18 04:14:34.436775 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-03-18 04:14:34.436786 | orchestrator | 2026-03-18 04:14:34.436797 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-18 04:14:34.436807 | orchestrator | Wednesday 18 March 2026 04:14:28 +0000 (0:00:02.915) 0:00:36.597 ******* 2026-03-18 04:14:34.436818 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:14:34.436829 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:14:34.436840 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:14:34.436851 | orchestrator | 2026-03-18 04:14:34.436862 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-18 04:14:34.436892 | orchestrator | Wednesday 18 March 2026 04:14:30 +0000 (0:00:02.302) 0:00:38.900 ******* 2026-03-18 04:14:34.436904 | orchestrator | ok: 
[testbed-node-0] 2026-03-18 04:14:34.436914 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:14:34.436928 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:14:34.436940 | orchestrator | 2026-03-18 04:14:34.436952 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-18 04:14:34.436964 | orchestrator | Wednesday 18 March 2026 04:14:32 +0000 (0:00:02.196) 0:00:41.096 ******* 2026-03-18 04:14:34.436977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-18 04:14:34.437010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 04:14:34.437030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 04:14:34.437045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5ee736485f10d90755392b8f998fea694da11c41', '__omit_place_holder__5ee736485f10d90755392b8f998fea694da11c41'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-18 04:14:34.437059 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:14:34.437072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-18 04:14:34.437085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 04:14:34.437106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 04:14:34.437118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5ee736485f10d90755392b8f998fea694da11c41', '__omit_place_holder__5ee736485f10d90755392b8f998fea694da11c41'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-18 04:14:34.437133 | orchestrator | skipping: [testbed-node-1] 2026-03-18 
04:14:34.437153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-18 04:14:38.552725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 04:14:38.552836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 04:14:38.552854 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5ee736485f10d90755392b8f998fea694da11c41', '__omit_place_holder__5ee736485f10d90755392b8f998fea694da11c41'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-18 04:14:38.552890 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:14:38.552905 | orchestrator | 2026-03-18 04:14:38.552918 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-18 04:14:38.552930 | orchestrator | Wednesday 18 March 2026 04:14:34 +0000 (0:00:01.718) 0:00:42.815 ******* 2026-03-18 04:14:38.552941 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-18 04:14:38.552953 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-18 04:14:38.552965 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-18 04:14:38.552995 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 04:14:38.553015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 04:14:38.553035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5ee736485f10d90755392b8f998fea694da11c41', '__omit_place_holder__5ee736485f10d90755392b8f998fea694da11c41'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-18 04:14:38.553046 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 04:14:38.553057 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 04:14:38.553069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 04:14:38.553093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5ee736485f10d90755392b8f998fea694da11c41', '__omit_place_holder__5ee736485f10d90755392b8f998fea694da11c41'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-18 04:14:52.325001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 04:14:52.325125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5ee736485f10d90755392b8f998fea694da11c41', '__omit_place_holder__5ee736485f10d90755392b8f998fea694da11c41'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-18 04:14:52.325169 | orchestrator | 2026-03-18 04:14:52.325183 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-18 04:14:52.325197 | orchestrator | Wednesday 18 March 2026 04:14:38 +0000 (0:00:04.119) 0:00:46.935 ******* 2026-03-18 04:14:52.325209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-18 04:14:52.325222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-18 04:14:52.325233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-18 04:14:52.325260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 04:14:52.325292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 04:14:52.325313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 04:14:52.325325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 04:14:52.325337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 04:14:52.325348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 04:14:52.325360 | orchestrator | 2026-03-18 04:14:52.325372 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-18 04:14:52.325384 | orchestrator | Wednesday 18 March 2026 04:14:43 +0000 (0:00:04.754) 0:00:51.690 ******* 2026-03-18 04:14:52.325398 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-18 04:14:52.325418 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-18 04:14:52.325509 | orchestrator | ok: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-18 04:14:52.325531 | orchestrator | 2026-03-18 04:14:52.325550 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-18 04:14:52.325567 | orchestrator | Wednesday 18 March 2026 04:14:45 +0000 (0:00:02.702) 0:00:54.393 ******* 2026-03-18 04:14:52.325583 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-18 04:14:52.325599 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-18 04:14:52.325624 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-18 04:14:52.325644 | orchestrator | 2026-03-18 04:14:52.325663 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-18 04:14:52.325681 | orchestrator | Wednesday 18 March 2026 04:14:50 +0000 (0:00:04.390) 0:00:58.783 ******* 2026-03-18 04:14:52.325713 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:14:52.325734 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:14:52.325767 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:15:12.926391 | orchestrator | 2026-03-18 04:15:12.926583 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-18 04:15:12.926603 | orchestrator | Wednesday 18 March 2026 04:14:52 +0000 (0:00:01.924) 0:01:00.708 ******* 2026-03-18 04:15:12.926616 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-18 04:15:12.926627 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-18 04:15:12.926638 | orchestrator | ok: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-18 04:15:12.926649 | orchestrator | 2026-03-18 04:15:12.926661 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-18 04:15:12.926672 | orchestrator | Wednesday 18 March 2026 04:14:55 +0000 (0:00:03.198) 0:01:03.907 ******* 2026-03-18 04:15:12.926682 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-18 04:15:12.926695 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-18 04:15:12.926705 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-18 04:15:12.926716 | orchestrator | 2026-03-18 04:15:12.926727 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-18 04:15:12.926737 | orchestrator | Wednesday 18 March 2026 04:14:58 +0000 (0:00:02.878) 0:01:06.785 ******* 2026-03-18 04:15:12.926748 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:15:12.926759 | orchestrator | 2026-03-18 04:15:12.926769 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-18 04:15:12.926780 | orchestrator | Wednesday 18 March 2026 04:15:00 +0000 (0:00:01.958) 0:01:08.744 ******* 2026-03-18 04:15:12.926791 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-03-18 04:15:12.926802 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-03-18 04:15:12.926813 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-03-18 04:15:12.926824 | orchestrator | 2026-03-18 04:15:12.926834 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-18 04:15:12.926846 | 
orchestrator | Wednesday 18 March 2026 04:15:03 +0000 (0:00:02.710) 0:01:11.454 ******* 2026-03-18 04:15:12.926857 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-18 04:15:12.926868 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-18 04:15:12.926879 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-18 04:15:12.926890 | orchestrator | 2026-03-18 04:15:12.926900 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-03-18 04:15:12.926911 | orchestrator | Wednesday 18 March 2026 04:15:05 +0000 (0:00:02.674) 0:01:14.129 ******* 2026-03-18 04:15:12.926924 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:15:12.926937 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:15:12.926949 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:15:12.926962 | orchestrator | 2026-03-18 04:15:12.926974 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-03-18 04:15:12.926986 | orchestrator | Wednesday 18 March 2026 04:15:07 +0000 (0:00:01.366) 0:01:15.496 ******* 2026-03-18 04:15:12.926999 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:15:12.927011 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:15:12.927024 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:15:12.927036 | orchestrator | 2026-03-18 04:15:12.927048 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-18 04:15:12.927085 | orchestrator | Wednesday 18 March 2026 04:15:08 +0000 (0:00:01.801) 0:01:17.297 ******* 2026-03-18 04:15:12.927103 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-18 04:15:12.927136 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-18 04:15:12.927169 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-18 04:15:12.927181 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 04:15:12.927193 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 04:15:12.927204 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 04:15:12.927224 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 04:15:12.927236 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 04:15:12.927258 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 04:15:16.917905 | orchestrator | 2026-03-18 04:15:16.918003 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-18 04:15:16.918070 | orchestrator | Wednesday 18 March 2026 04:15:12 +0000 (0:00:04.009) 0:01:21.307 ******* 2026-03-18 04:15:16.918085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-18 04:15:16.918097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 04:15:16.918107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 04:15:16.918117 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:15:16.918128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-18 04:15:16.918159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 04:15:16.918182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 04:15:16.918192 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:15:16.918216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-18 04:15:16.918226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 04:15:16.918236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 04:15:16.918245 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:15:16.918255 | orchestrator | 2026-03-18 04:15:16.918264 | orchestrator | TASK [service-cert-copy : 
mariadb | Copying over backend internal TLS key] ***** 2026-03-18 04:15:16.918280 | orchestrator | Wednesday 18 March 2026 04:15:14 +0000 (0:00:01.715) 0:01:23.022 ******* 2026-03-18 04:15:16.918289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-18 04:15:16.918299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 04:15:16.918309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 04:15:16.918322 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:15:16.918339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-18 04:15:28.892756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 04:15:28.892892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 04:15:28.892922 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:15:28.893033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-18 04:15:28.893058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 04:15:28.893079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 04:15:28.893097 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:15:28.893119 | orchestrator | 2026-03-18 04:15:28.893138 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-18 04:15:28.893160 | orchestrator | Wednesday 18 March 2026 04:15:16 +0000 (0:00:02.280) 0:01:25.302 ******* 2026-03-18 04:15:28.893179 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-18 04:15:28.893220 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-18 04:15:28.893239 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-18 04:15:28.893259 | orchestrator | 2026-03-18 04:15:28.893280 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-18 04:15:28.893300 | orchestrator | Wednesday 18 March 2026 04:15:19 +0000 (0:00:02.560) 0:01:27.863 ******* 2026-03-18 04:15:28.893322 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-18 04:15:28.893342 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-18 04:15:28.893364 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-18 04:15:28.893385 | orchestrator | 2026-03-18 04:15:28.893433 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-18 04:15:28.893456 | orchestrator | Wednesday 18 March 2026 04:15:22 +0000 (0:00:02.552) 0:01:30.416 ******* 2026-03-18 
04:15:28.893479 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-18 04:15:28.893528 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-18 04:15:28.893549 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-18 04:15:28.893568 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:15:28.893587 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-18 04:15:28.893623 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-18 04:15:28.893644 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:15:28.893664 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-18 04:15:28.893682 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:15:28.893702 | orchestrator | 2026-03-18 04:15:28.893721 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-03-18 04:15:28.893741 | orchestrator | Wednesday 18 March 2026 04:15:24 +0000 (0:00:02.744) 0:01:33.161 ******* 2026-03-18 04:15:28.893761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 
'timeout': '30'}}}) 2026-03-18 04:15:28.893782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-18 04:15:28.893802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-18 04:15:28.893821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 04:15:28.893844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 04:15:32.707476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 04:15:32.707648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 04:15:32.707669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 04:15:32.707682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 04:15:32.707695 | orchestrator | 2026-03-18 04:15:32.707708 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-03-18 04:15:32.707721 | orchestrator | Wednesday 18 March 2026 04:15:28 +0000 (0:00:04.104) 0:01:37.265 ******* 2026-03-18 04:15:32.707733 | orchestrator | changed: [testbed-node-0] => { 2026-03-18 04:15:32.707745 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:15:32.707756 | orchestrator | } 2026-03-18 04:15:32.707768 | orchestrator | changed: [testbed-node-1] => { 2026-03-18 04:15:32.707779 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:15:32.707790 | orchestrator | } 2026-03-18 04:15:32.707801 | orchestrator | changed: [testbed-node-2] => { 2026-03-18 04:15:32.707811 | orchestrator |  
"msg": "Notifying handlers" 2026-03-18 04:15:32.707822 | orchestrator | } 2026-03-18 04:15:32.707833 | orchestrator | 2026-03-18 04:15:32.707845 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-18 04:15:32.707856 | orchestrator | Wednesday 18 March 2026 04:15:30 +0000 (0:00:01.551) 0:01:38.817 ******* 2026-03-18 04:15:32.707867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-18 04:15:32.707941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 04:15:32.707957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 04:15:32.707968 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:15:32.707980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-18 04:15:32.707992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 04:15:32.708005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 04:15:32.708017 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:15:32.708035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-18 04:15:32.708057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 04:15:32.708078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 04:15:38.242646 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:15:38.242773 | orchestrator | 2026-03-18 04:15:38.242797 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-18 04:15:38.242813 | orchestrator | Wednesday 18 March 2026 04:15:32 +0000 (0:00:02.266) 0:01:41.084 ******* 2026-03-18 04:15:38.242826 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:15:38.242838 | orchestrator | 2026-03-18 04:15:38.242850 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-18 04:15:38.242863 | orchestrator | Wednesday 18 March 2026 04:15:34 +0000 (0:00:01.993) 0:01:43.077 ******* 2026-03-18 04:15:38.242882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:15:38.242901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 04:15:38.242915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 04:15:38.242974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 04:15:38.243010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:15:38.243026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 04:15:38.243039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:15:38.243054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 04:15:38.243084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 04:15:38.243099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 04:15:38.243121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 04:15:39.951092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 04:15:39.951216 | orchestrator | 2026-03-18 04:15:39.951240 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-18 04:15:39.951255 | orchestrator | Wednesday 18 March 2026 04:15:39 +0000 (0:00:04.660) 0:01:47.737 ******* 2026-03-18 04:15:39.951267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:15:39.951281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 04:15:39.951327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 04:15:39.951338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 04:15:39.951347 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:15:39.951374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:15:39.951384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 04:15:39.951394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 04:15:39.951403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 04:15:39.951418 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:15:39.951431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:15:39.951441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-18 04:15:39.951456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-18 04:15:54.966368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-18 04:15:54.966491 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:15:54.966509 | orchestrator | 2026-03-18 04:15:54.966523 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-18 04:15:54.966589 | orchestrator | Wednesday 18 March 2026 04:15:41 +0000 (0:00:01.773) 0:01:49.511 ******* 2026-03-18 04:15:54.966602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:15:54.966644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:15:54.966658 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:15:54.966669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:15:54.966680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:15:54.966691 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:15:54.966702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:15:54.966729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:15:54.966741 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:15:54.966752 | orchestrator | 2026-03-18 04:15:54.966764 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-18 04:15:54.966775 | orchestrator | Wednesday 18 March 2026 04:15:43 +0000 (0:00:02.214) 0:01:51.726 ******* 2026-03-18 
04:15:54.966786 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:15:54.966797 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:15:54.966808 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:15:54.966818 | orchestrator | 2026-03-18 04:15:54.966829 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-18 04:15:54.966840 | orchestrator | Wednesday 18 March 2026 04:15:45 +0000 (0:00:02.300) 0:01:54.026 ******* 2026-03-18 04:15:54.966850 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:15:54.966861 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:15:54.966871 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:15:54.966882 | orchestrator | 2026-03-18 04:15:54.966893 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-18 04:15:54.966906 | orchestrator | Wednesday 18 March 2026 04:15:48 +0000 (0:00:02.812) 0:01:56.839 ******* 2026-03-18 04:15:54.966918 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:15:54.966930 | orchestrator | 2026-03-18 04:15:54.966942 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-18 04:15:54.966954 | orchestrator | Wednesday 18 March 2026 04:15:50 +0000 (0:00:01.724) 0:01:58.563 ******* 2026-03-18 04:15:54.966989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:15:54.967014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-18 04:15:54.967029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-18 04:15:54.967049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:15:54.967063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-18 04:15:54.967076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 04:15:54.967099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:15:56.779762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-18 04:15:56.779889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 04:15:56.779908 | orchestrator |
2026-03-18 04:15:56.779922 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-03-18 04:15:56.779935 | orchestrator | Wednesday 18 March 2026 04:15:54 +0000 (0:00:04.784) 0:02:03.348 *******
2026-03-18 04:15:56.779950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:15:56.779964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-18 04:15:56.779998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 04:15:56.780010 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:15:56.780042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:15:56.780061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-18 04:15:56.780073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 04:15:56.780084 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:15:56.780127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:15:56.780148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-18 04:15:56.780168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-18 04:16:13.207431 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:16:13.207636 | orchestrator |
2026-03-18 04:16:13.207660 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-03-18 04:16:13.207673 | orchestrator | Wednesday 18 March 2026 04:15:56 +0000 (0:00:01.816) 0:02:05.165 *******
2026-03-18 04:16:13.207685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-18 04:16:13.207700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-18 04:16:13.207712 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:16:13.207741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-18 04:16:13.207754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-18 04:16:13.207766 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:16:13.207777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-18 04:16:13.207788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-18 04:16:13.207824 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:16:13.207836 | orchestrator |
2026-03-18 04:16:13.207847 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-03-18 04:16:13.207858 | orchestrator | Wednesday 18 March 2026 04:15:58 +0000 (0:00:02.326) 0:02:07.051 *******
2026-03-18 04:16:13.207869 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:16:13.207880 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:16:13.207891 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:16:13.207901 | orchestrator |
2026-03-18 04:16:13.207912 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-03-18 04:16:13.207923 | orchestrator | Wednesday 18 March 2026 04:16:00 +0000 (0:00:02.326) 0:02:09.378 *******
2026-03-18 04:16:13.207934 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:16:13.207944 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:16:13.207955 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:16:13.207967 | orchestrator |
2026-03-18 04:16:13.207980 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-03-18 04:16:13.207992 | orchestrator | Wednesday 18 March 2026 04:16:03 +0000 (0:00:02.842) 0:02:12.220 *******
2026-03-18 04:16:13.208005 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:16:13.208017 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:16:13.208029 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:16:13.208042 | orchestrator |
2026-03-18 04:16:13.208054 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-03-18 04:16:13.208067 | orchestrator | Wednesday 18 March 2026 04:16:05 +0000 (0:00:01.428) 0:02:13.649 *******
2026-03-18 04:16:13.208080 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 04:16:13.208093 | orchestrator |
2026-03-18 04:16:13.208105 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-03-18 04:16:13.208117 | orchestrator | Wednesday 18 March 2026 04:16:06 +0000 (0:00:01.734) 0:02:15.383 *******
2026-03-18 04:16:13.208131 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-18 04:16:13.208167 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-18 04:16:13.208183 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-18 04:16:13.208204 | orchestrator |
2026-03-18 04:16:13.208215 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-03-18 04:16:13.208227 | orchestrator | Wednesday 18 March 2026 04:16:10 +0000 (0:00:03.608) 0:02:18.992 *******
2026-03-18 04:16:13.208245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-18 04:16:13.208258 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:16:13.208269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-18 04:16:13.208280 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:16:13.208299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-18 04:16:25.796773 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:16:25.796889 | orchestrator |
2026-03-18 04:16:25.796908 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-03-18 04:16:25.796921 | orchestrator | Wednesday 18 March 2026 04:16:13 +0000 (0:00:02.596) 0:02:21.589 *******
2026-03-18 04:16:25.796935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-18 04:16:25.796990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-18 04:16:25.797004 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:16:25.797016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-18 04:16:25.797027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-18 04:16:25.797038 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:16:25.797050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-18 04:16:25.797061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-18 04:16:25.797072 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:16:25.797083 | orchestrator |
2026-03-18 04:16:25.797094 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-03-18 04:16:25.797105 | orchestrator | Wednesday 18 March 2026 04:16:16 +0000 (0:00:03.061) 0:02:24.651 *******
2026-03-18 04:16:25.797115 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:16:25.797126 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:16:25.797137 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:16:25.797147 | orchestrator |
2026-03-18 04:16:25.797158 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-03-18 04:16:25.797169 | orchestrator | Wednesday 18 March 2026 04:16:17 +0000 (0:00:01.465) 0:02:26.116 *******
2026-03-18 04:16:25.797179 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:16:25.797190 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:16:25.797201 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:16:25.797212 | orchestrator |
2026-03-18 04:16:25.797222 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-03-18 04:16:25.797235 | orchestrator | Wednesday 18 March 2026 04:16:20 +0000 (0:00:02.492) 0:02:28.608 *******
2026-03-18 04:16:25.797247 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 04:16:25.797260 | orchestrator |
2026-03-18 04:16:25.797272 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-03-18 04:16:25.797284 | orchestrator | Wednesday 18 March 2026 04:16:21 +0000 (0:00:01.785) 0:02:30.394 *******
2026-03-18 04:16:25.797335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:16:25.797354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 04:16:25.797369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:16:25.797383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 04:16:25.797396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 04:16:25.797425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 04:16:27.819219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 04:16:27.819475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 04:16:27.819502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:16:27.819517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 04:16:27.819616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 04:16:27.819662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 04:16:27.819676 | orchestrator |
2026-03-18 04:16:27.819689 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-03-18 04:16:27.819702 | orchestrator | Wednesday 18 March 2026 04:16:26 +0000 (0:00:04.949) 0:02:35.344 *******
2026-03-18 04:16:27.819715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:16:27.819731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-18 04:16:27.819744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-18 04:16:27.819766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-18 04:16:27.819779 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:16:27.819809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:16:39.164743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 04:16:39.164862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 
'timeout': '30'}}})  2026-03-18 04:16:39.164879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-18 04:16:39.164922 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:16:39.164939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:16:39.164966 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 04:16:39.164998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-18 04:16:39.165011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-18 04:16:39.165022 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:16:39.165034 | orchestrator | 2026-03-18 04:16:39.165046 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-18 04:16:39.165058 | orchestrator | Wednesday 18 March 2026 04:16:28 +0000 (0:00:02.011) 0:02:37.355 ******* 2026-03-18 04:16:39.165070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:16:39.165092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:16:39.165104 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:16:39.165116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:16:39.165127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:16:39.165138 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:16:39.165149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:16:39.165160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:16:39.165170 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:16:39.165181 | orchestrator | 2026-03-18 04:16:39.165193 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-18 04:16:39.165204 | orchestrator | Wednesday 18 March 2026 04:16:30 +0000 (0:00:02.016) 0:02:39.371 ******* 2026-03-18 04:16:39.165215 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:16:39.165226 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:16:39.165238 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:16:39.165251 | orchestrator | 2026-03-18 04:16:39.165263 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-18 04:16:39.165276 | orchestrator | Wednesday 18 March 2026 04:16:33 +0000 (0:00:02.258) 0:02:41.629 ******* 2026-03-18 04:16:39.165293 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:16:39.165306 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:16:39.165318 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:16:39.165329 | orchestrator | 2026-03-18 04:16:39.165342 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-18 04:16:39.165354 | orchestrator | Wednesday 18 March 2026 04:16:36 +0000 (0:00:02.883) 0:02:44.513 ******* 2026-03-18 04:16:39.165366 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:16:39.165378 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:16:39.165391 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:16:39.165403 | orchestrator | 2026-03-18 04:16:39.165415 | 
orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-18 04:16:39.165427 | orchestrator | Wednesday 18 March 2026 04:16:37 +0000 (0:00:01.629) 0:02:46.143 ******* 2026-03-18 04:16:39.165440 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:16:39.165453 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:16:39.165471 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:16:44.601242 | orchestrator | 2026-03-18 04:16:44.601365 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-18 04:16:44.601387 | orchestrator | Wednesday 18 March 2026 04:16:39 +0000 (0:00:01.405) 0:02:47.548 ******* 2026-03-18 04:16:44.601402 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:16:44.601416 | orchestrator | 2026-03-18 04:16:44.601429 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-18 04:16:44.601443 | orchestrator | Wednesday 18 March 2026 04:16:40 +0000 (0:00:01.790) 0:02:49.339 ******* 2026-03-18 04:16:44.601488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:16:44.601509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 04:16:44.601525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 04:16:44.601719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 04:16:44.601770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 04:16:44.601816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 04:16:44.601850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-18 04:16:44.601868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:16:44.601886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 04:16:44.601903 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 04:16:44.601925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 04:16:44.601955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:16:46.452955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 04:16:46.453063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 04:16:46.453081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 04:16:46.453112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 04:16:46.453125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-18 04:16:46.453158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 04:16:46.453191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 04:16:46.453204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 04:16:46.453216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-18 04:16:46.453228 | orchestrator | 2026-03-18 04:16:46.453242 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-18 04:16:46.453254 | orchestrator | Wednesday 18 March 2026 04:16:45 +0000 (0:00:04.884) 0:02:54.224 ******* 2026-03-18 04:16:46.453272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:16:46.453296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 04:16:46.453317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 04:16:47.678553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 04:16:47.678716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 04:16:47.678735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 04:16:47.678747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-18 04:16:47.678760 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:16:47.678798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:16:47.678830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 04:16:47.678844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 04:16:47.678856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 04:16:47.678867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 04:16:47.679585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 04:16:47.679658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-18 04:16:47.679672 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:16:47.679696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:17:02.921537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-18 04:17:02.921670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-18 04:17:02.921681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-18 04:17:02.921709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-18 04:17:02.921730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-18 04:17:02.921738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-18 04:17:02.921746 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:17:02.921756 | orchestrator |
2026-03-18 04:17:02.921764 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-03-18 04:17:02.921772 | orchestrator | Wednesday 18 March 2026 04:16:47 +0000 (0:00:01.842) 0:02:56.066 *******
2026-03-18 04:17:02.921793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-18 04:17:02.921804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-18 04:17:02.921813 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:17:02.921821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-18 04:17:02.921828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-18 04:17:02.921836 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:17:02.921843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-18 04:17:02.921850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-03-18 04:17:02.921858 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:17:02.921870 | orchestrator |
2026-03-18 04:17:02.921877 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-03-18 04:17:02.921885 | orchestrator | Wednesday 18 March 2026 04:16:49 +0000 (0:00:02.041) 0:02:58.108 *******
2026-03-18 04:17:02.921892 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:17:02.921901 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:17:02.921908 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:17:02.921915 | orchestrator |
2026-03-18 04:17:02.921923 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-03-18 04:17:02.921930 | orchestrator | Wednesday 18 March 2026 04:16:51 +0000 (0:00:02.182) 0:03:00.291 *******
2026-03-18 04:17:02.921937 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:17:02.921944 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:17:02.921951 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:17:02.921959 | orchestrator |
2026-03-18 04:17:02.921966 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-03-18 04:17:02.921973 | orchestrator | Wednesday 18 March 2026 04:16:54 +0000 (0:00:01.474) 0:03:03.195 *******
2026-03-18 04:17:02.921980 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:17:02.921988 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:17:02.921995 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:17:02.922002 | orchestrator |
2026-03-18 04:17:02.922009 | orchestrator | TASK
[include_role : glance] *************************************************** 2026-03-18 04:17:02.922060 | orchestrator | Wednesday 18 March 2026 04:16:56 +0000 (0:00:01.474) 0:03:04.669 ******* 2026-03-18 04:17:02.922068 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:17:02.922075 | orchestrator | 2026-03-18 04:17:02.922086 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-18 04:17:02.922093 | orchestrator | Wednesday 18 March 2026 04:16:58 +0000 (0:00:01.886) 0:03:06.556 ******* 2026-03-18 04:17:02.922115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 04:17:04.130170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-18 04:17:04.130342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 04:17:04.130387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-18 04:17:04.130414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-18 
04:17:04.130437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-18 
04:17:07.733892 | orchestrator | 2026-03-18 04:17:07.733999 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-18 04:17:07.734075 | orchestrator | Wednesday 18 March 2026 04:17:04 +0000 (0:00:05.965) 0:03:12.521 ******* 2026-03-18 04:17:07.734114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-18 04:17:07.734132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-18 04:17:07.734195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-18 04:17:07.734209 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:17:07.734223 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-18 04:17:07.734243 | orchestrator | 
skipping: [testbed-node-0] 2026-03-18 04:17:07.734269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-18 04:17:26.770350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-18 04:17:26.770496 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:17:26.770514 | orchestrator | 2026-03-18 04:17:26.770526 | orchestrator | TASK [haproxy-config 
: Configuring firewall for glance] ************************ 2026-03-18 04:17:26.770539 | orchestrator | Wednesday 18 March 2026 04:17:08 +0000 (0:00:04.748) 0:03:17.270 ******* 2026-03-18 04:17:26.770552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-18 04:17:26.770566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-18 04:17:26.770578 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:17:26.770590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-18 04:17:26.770633 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-18 04:17:26.770646 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:17:26.770719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-18 04:17:26.770733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-18 04:17:26.770755 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:17:26.770766 | orchestrator | 2026-03-18 04:17:26.770777 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 
2026-03-18 04:17:26.770789 | orchestrator | Wednesday 18 March 2026 04:17:13 +0000 (0:00:04.656) 0:03:21.926 ******* 2026-03-18 04:17:26.770800 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:17:26.770811 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:17:26.770822 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:17:26.770833 | orchestrator | 2026-03-18 04:17:26.770845 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-18 04:17:26.770856 | orchestrator | Wednesday 18 March 2026 04:17:15 +0000 (0:00:02.395) 0:03:24.322 ******* 2026-03-18 04:17:26.770867 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:17:26.770878 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:17:26.770891 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:17:26.770903 | orchestrator | 2026-03-18 04:17:26.770916 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-18 04:17:26.770928 | orchestrator | Wednesday 18 March 2026 04:17:18 +0000 (0:00:03.010) 0:03:27.333 ******* 2026-03-18 04:17:26.770941 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:17:26.770953 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:17:26.770966 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:17:26.770978 | orchestrator | 2026-03-18 04:17:26.770991 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-18 04:17:26.771003 | orchestrator | Wednesday 18 March 2026 04:17:20 +0000 (0:00:01.480) 0:03:28.814 ******* 2026-03-18 04:17:26.771015 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:17:26.771028 | orchestrator | 2026-03-18 04:17:26.771041 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-18 04:17:26.771054 | orchestrator | Wednesday 18 March 2026 04:17:22 +0000 (0:00:01.700) 0:03:30.514 ******* 2026-03-18 
04:17:26.771067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:17:26.771096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:17:43.195127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:17:43.195294 | orchestrator | 2026-03-18 04:17:43.195318 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-18 04:17:43.195331 | orchestrator | Wednesday 18 March 2026 04:17:26 +0000 (0:00:04.638) 0:03:35.153 ******* 2026-03-18 04:17:43.195344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:17:43.195357 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:17:43.195369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:17:43.195381 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:17:43.195392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:17:43.195403 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:17:43.195414 | orchestrator | 2026-03-18 04:17:43.195425 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-18 04:17:43.195436 | orchestrator | Wednesday 18 March 2026 04:17:28 +0000 (0:00:01.722) 0:03:36.875 ******* 2026-03-18 04:17:43.195462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:17:43.195477 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:17:43.195497 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:17:43.195533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:17:43.195546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:17:43.195557 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:17:43.195568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:17:43.195579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:17:43.195590 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:17:43.195601 | orchestrator | 2026-03-18 04:17:43.195612 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-18 04:17:43.195623 | orchestrator | Wednesday 18 March 2026 04:17:29 +0000 (0:00:01.458) 0:03:38.334 ******* 2026-03-18 04:17:43.195634 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:17:43.195646 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:17:43.195656 | 
orchestrator | ok: [testbed-node-2] 2026-03-18 04:17:43.195667 | orchestrator | 2026-03-18 04:17:43.195678 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-18 04:17:43.195728 | orchestrator | Wednesday 18 March 2026 04:17:32 +0000 (0:00:02.215) 0:03:40.550 ******* 2026-03-18 04:17:43.195740 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:17:43.195751 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:17:43.195761 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:17:43.195772 | orchestrator | 2026-03-18 04:17:43.195782 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-18 04:17:43.195793 | orchestrator | Wednesday 18 March 2026 04:17:34 +0000 (0:00:02.820) 0:03:43.370 ******* 2026-03-18 04:17:43.195804 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:17:43.195815 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:17:43.195826 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:17:43.195837 | orchestrator | 2026-03-18 04:17:43.195847 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-18 04:17:43.195858 | orchestrator | Wednesday 18 March 2026 04:17:36 +0000 (0:00:01.385) 0:03:44.755 ******* 2026-03-18 04:17:43.195869 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:17:43.195880 | orchestrator | 2026-03-18 04:17:43.195890 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-18 04:17:43.195901 | orchestrator | Wednesday 18 March 2026 04:17:38 +0000 (0:00:01.976) 0:03:46.732 ******* 2026-03-18 04:17:43.195933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-18 
04:17:44.949382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-18 04:17:44.949513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-18 04:17:44.949550 | orchestrator | 2026-03-18 04:17:44.949562 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-18 04:17:44.949572 | orchestrator | Wednesday 18 March 2026 04:17:43 +0000 (0:00:04.849) 0:03:51.582 ******* 2026-03-18 04:17:44.949583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-18 04:17:44.949599 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:17:44.949625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-18 04:17:54.051289 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:17:54.051404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-18 04:17:54.051444 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:17:54.051456 | orchestrator | 2026-03-18 04:17:54.051466 | 
orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-18 04:17:54.051476 | orchestrator | Wednesday 18 March 2026 04:17:44 +0000 (0:00:01.760) 0:03:53.342 ******* 2026-03-18 04:17:54.051565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-18 04:17:54.051585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-18 04:17:54.051596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-18 04:17:54.051607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-18 04:17:54.051616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-18 04:17:54.051627 | orchestrator | skipping: [testbed-node-0] 2026-03-18 
04:17:54.051652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-18 04:17:54.051662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-18 04:17:54.051671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-18 04:17:54.051689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-18 04:17:54.051745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-18 04:17:54.051755 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:17:54.051764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-18 04:17:54.051773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-18 04:17:54.051787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-18 04:17:54.051797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-18 04:17:54.051806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-18 04:17:54.051814 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:17:54.051824 | orchestrator | 2026-03-18 04:17:54.051835 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-18 04:17:54.051846 | orchestrator | Wednesday 18 March 2026 04:17:47 +0000 (0:00:02.151) 0:03:55.494 ******* 2026-03-18 04:17:54.051856 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:17:54.051867 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:17:54.051877 | orchestrator 
| ok: [testbed-node-2] 2026-03-18 04:17:54.051887 | orchestrator | 2026-03-18 04:17:54.051897 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-18 04:17:54.051907 | orchestrator | Wednesday 18 March 2026 04:17:49 +0000 (0:00:02.310) 0:03:57.804 ******* 2026-03-18 04:17:54.051917 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:17:54.051927 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:17:54.051937 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:17:54.051947 | orchestrator | 2026-03-18 04:17:54.051956 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-18 04:17:54.051967 | orchestrator | Wednesday 18 March 2026 04:17:52 +0000 (0:00:02.895) 0:04:00.700 ******* 2026-03-18 04:17:54.051976 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:17:54.051987 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:17:54.051997 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:17:54.052007 | orchestrator | 2026-03-18 04:17:54.052017 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-18 04:17:54.052027 | orchestrator | Wednesday 18 March 2026 04:17:53 +0000 (0:00:01.468) 0:04:02.169 ******* 2026-03-18 04:17:54.052043 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:18:03.928289 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:18:03.928420 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:18:03.928435 | orchestrator | 2026-03-18 04:18:03.928448 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-18 04:18:03.928460 | orchestrator | Wednesday 18 March 2026 04:17:55 +0000 (0:00:01.487) 0:04:03.657 ******* 2026-03-18 04:18:03.928471 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:18:03.928482 | orchestrator | 2026-03-18 04:18:03.928493 | orchestrator | TASK 
[haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-18 04:18:03.928504 | orchestrator | Wednesday 18 March 2026 04:17:57 +0000 (0:00:02.044) 0:04:05.701 ******* 2026-03-18 04:18:03.928523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-18 04:18:03.928548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 
04:18:03.928586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-18 04:18:03.928602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 04:18:03.928642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 04:18:03.928655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 04:18:03.928667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-18 04:18:03.928685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 04:18:03.928696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 04:18:03.928707 | orchestrator | 2026-03-18 04:18:03.928785 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-18 04:18:03.928805 | orchestrator | Wednesday 18 March 2026 04:18:01 +0000 (0:00:04.676) 0:04:10.377 ******* 2026-03-18 04:18:03.928826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-18 04:18:05.769703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 04:18:05.769838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 04:18:05.769851 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:18:05.769876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-18 04:18:05.769884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 04:18:05.769908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 04:18:05.769915 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:18:05.769939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-18 04:18:05.769946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-18 04:18:05.769957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-18 04:18:05.769963 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:18:05.769969 | orchestrator | 2026-03-18 04:18:05.769976 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-18 04:18:05.769984 | orchestrator | Wednesday 18 March 2026 04:18:03 +0000 (0:00:01.937) 0:04:12.315 ******* 2026-03-18 04:18:05.769991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-18 04:18:05.770004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-18 04:18:05.770012 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:18:05.770063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-18 04:18:05.770071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-18 04:18:05.770078 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:18:05.770085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-18 04:18:05.770092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-18 04:18:05.770098 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:18:05.770105 | orchestrator | 
2026-03-18 04:18:05.770112 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-18 04:18:05.770124 | orchestrator | Wednesday 18 March 2026 04:18:05 +0000 (0:00:01.837) 0:04:14.152 ******* 2026-03-18 04:18:21.604627 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:18:21.604811 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:18:21.604846 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:18:21.604871 | orchestrator | 2026-03-18 04:18:21.604885 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-18 04:18:21.604897 | orchestrator | Wednesday 18 March 2026 04:18:08 +0000 (0:00:02.272) 0:04:16.425 ******* 2026-03-18 04:18:21.604908 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:18:21.604919 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:18:21.604929 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:18:21.604940 | orchestrator | 2026-03-18 04:18:21.604951 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-18 04:18:21.604962 | orchestrator | Wednesday 18 March 2026 04:18:11 +0000 (0:00:03.313) 0:04:19.739 ******* 2026-03-18 04:18:21.604973 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:18:21.604985 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:18:21.604997 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:18:21.605008 | orchestrator | 2026-03-18 04:18:21.605019 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-18 04:18:21.605030 | orchestrator | Wednesday 18 March 2026 04:18:12 +0000 (0:00:01.383) 0:04:21.122 ******* 2026-03-18 04:18:21.605041 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:18:21.605052 | orchestrator | 2026-03-18 04:18:21.605062 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-18 
04:18:21.605073 | orchestrator | Wednesday 18 March 2026 04:18:14 +0000 (0:00:01.828) 0:04:22.951 ******* 2026-03-18 04:18:21.605106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:18:21.605146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 04:18:21.605162 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:18:21.605196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 04:18:21.605210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:18:21.605236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 04:18:21.605249 | orchestrator | 2026-03-18 04:18:21.605262 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-18 04:18:21.605275 | orchestrator | Wednesday 18 March 2026 04:18:19 +0000 (0:00:05.152) 
0:04:28.104 ******* 2026-03-18 04:18:21.605289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:18:21.605310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 04:18:34.654869 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:18:34.655020 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:18:34.655083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 04:18:34.655098 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:18:34.655111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:18:34.655123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-18 04:18:34.655134 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:18:34.655146 | orchestrator | 2026-03-18 04:18:34.655158 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] 
************************ 2026-03-18 04:18:34.655170 | orchestrator | Wednesday 18 March 2026 04:18:21 +0000 (0:00:01.886) 0:04:29.990 ******* 2026-03-18 04:18:34.655197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:18:34.655212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:18:34.655225 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:18:34.655236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:18:34.655255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:18:34.655267 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:18:34.655277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:18:34.655288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:18:34.655299 | orchestrator | skipping: [testbed-node-2] 
2026-03-18 04:18:34.655310 | orchestrator | 2026-03-18 04:18:34.655326 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-18 04:18:34.655337 | orchestrator | Wednesday 18 March 2026 04:18:23 +0000 (0:00:02.012) 0:04:32.003 ******* 2026-03-18 04:18:34.655348 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:18:34.655359 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:18:34.655372 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:18:34.655384 | orchestrator | 2026-03-18 04:18:34.655395 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-18 04:18:34.655408 | orchestrator | Wednesday 18 March 2026 04:18:25 +0000 (0:00:02.349) 0:04:34.352 ******* 2026-03-18 04:18:34.655420 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:18:34.655432 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:18:34.655444 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:18:34.655456 | orchestrator | 2026-03-18 04:18:34.655468 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-18 04:18:34.655488 | orchestrator | Wednesday 18 March 2026 04:18:28 +0000 (0:00:02.880) 0:04:37.233 ******* 2026-03-18 04:18:34.655508 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:18:34.655529 | orchestrator | 2026-03-18 04:18:34.655548 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-18 04:18:34.655567 | orchestrator | Wednesday 18 March 2026 04:18:31 +0000 (0:00:02.190) 0:04:39.424 ******* 2026-03-18 04:18:34.655588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:18:34.655611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 04:18:34.655656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 04:18:36.743534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-18 04:18:36.743675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:18:36.743708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:18:36.743729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 04:18:36.743832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 04:18:36.743881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 04:18:36.743903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 04:18:36.743915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-18 04:18:36.743927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-18 04:18:36.743939 | orchestrator | 2026-03-18 04:18:36.743952 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-18 04:18:36.743965 | orchestrator | Wednesday 18 March 2026 04:18:36 +0000 (0:00:05.085) 0:04:44.509 ******* 2026-03-18 04:18:36.743978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:18:36.744008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 04:18:39.892402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 04:18:39.892565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-18 04:18:39.892599 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:18:39.892625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:18:39.892647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 04:18:39.892696 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 04:18:39.892745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-18 04:18:39.892804 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:18:39.892834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:18:39.892855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 04:18:39.892875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-18 04:18:39.892909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-18 04:18:39.892929 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:18:39.892948 | orchestrator | 2026-03-18 04:18:39.892968 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-18 04:18:39.892989 | orchestrator | Wednesday 18 March 2026 04:18:37 +0000 (0:00:01.815) 0:04:46.325 ******* 2026-03-18 04:18:39.893009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:18:39.893032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:18:39.893052 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:18:39.893070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:18:39.893094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option 
httpchk']}})  2026-03-18 04:18:55.649692 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:18:55.649847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:18:55.649866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:18:55.649876 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:18:55.649883 | orchestrator | 2026-03-18 04:18:55.649891 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-18 04:18:55.649915 | orchestrator | Wednesday 18 March 2026 04:18:39 +0000 (0:00:01.952) 0:04:48.277 ******* 2026-03-18 04:18:55.649922 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:18:55.649929 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:18:55.649936 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:18:55.649942 | orchestrator | 2026-03-18 04:18:55.649949 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-18 04:18:55.649956 | orchestrator | Wednesday 18 March 2026 04:18:42 +0000 (0:00:02.277) 0:04:50.555 ******* 2026-03-18 04:18:55.649963 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:18:55.649970 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:18:55.649977 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:18:55.649984 | orchestrator | 2026-03-18 04:18:55.649990 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-18 04:18:55.649997 | orchestrator | Wednesday 18 March 2026 04:18:45 +0000 (0:00:02.989) 0:04:53.545 ******* 2026-03-18 04:18:55.650004 | orchestrator | 
included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:18:55.650010 | orchestrator | 2026-03-18 04:18:55.650065 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-18 04:18:55.650094 | orchestrator | Wednesday 18 March 2026 04:18:47 +0000 (0:00:02.634) 0:04:56.180 ******* 2026-03-18 04:18:55.650100 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:18:55.650108 | orchestrator | 2026-03-18 04:18:55.650115 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-18 04:18:55.650123 | orchestrator | Wednesday 18 March 2026 04:18:51 +0000 (0:00:03.985) 0:05:00.166 ******* 2026-03-18 04:18:55.650135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:18:55.650162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-18 04:18:55.650170 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:18:55.650178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:18:55.650191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-18 04:18:55.650198 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:18:55.650213 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:18:59.346345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-18 04:18:59.346473 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:18:59.346491 | orchestrator | 2026-03-18 04:18:59.346504 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-18 04:18:59.346516 | orchestrator | Wednesday 18 March 2026 04:18:55 +0000 (0:00:03.863) 0:05:04.030 ******* 2026-03-18 04:18:59.346531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:18:59.346546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-18 04:18:59.346559 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:18:59.346600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:18:59.346622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-18 04:18:59.346634 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:18:59.346646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  
2026-03-18 04:18:59.346667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-18 04:19:15.447129 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:19:15.447271 | orchestrator | 2026-03-18 04:19:15.447289 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-18 04:19:15.447315 | orchestrator | Wednesday 18 March 2026 04:18:59 +0000 (0:00:03.698) 0:05:07.728 ******* 2026-03-18 04:19:15.447329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-18 04:19:15.447348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-18 04:19:15.447360 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:19:15.447372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-18 04:19:15.447383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-18 04:19:15.447395 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:19:15.447406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-18 04:19:15.447417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-18 04:19:15.447436 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:19:15.447448 | orchestrator | 2026-03-18 04:19:15.447459 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-18 04:19:15.447470 | orchestrator | Wednesday 18 March 2026 04:19:03 +0000 (0:00:04.022) 0:05:11.751 ******* 2026-03-18 04:19:15.447481 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:19:15.447507 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:19:15.447519 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:19:15.447530 | orchestrator | 2026-03-18 04:19:15.447540 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-18 04:19:15.447557 | orchestrator | Wednesday 18 March 2026 04:19:06 +0000 (0:00:02.973) 0:05:14.724 ******* 2026-03-18 04:19:15.447568 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:19:15.447579 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:19:15.447590 | orchestrator | skipping: [testbed-node-2] 2026-03-18 
04:19:15.447601 | orchestrator | 2026-03-18 04:19:15.447612 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-18 04:19:15.447623 | orchestrator | Wednesday 18 March 2026 04:19:09 +0000 (0:00:02.733) 0:05:17.457 ******* 2026-03-18 04:19:15.447634 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:19:15.447645 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:19:15.447656 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:19:15.447667 | orchestrator | 2026-03-18 04:19:15.447680 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-18 04:19:15.447693 | orchestrator | Wednesday 18 March 2026 04:19:10 +0000 (0:00:01.518) 0:05:18.976 ******* 2026-03-18 04:19:15.447706 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:19:15.447719 | orchestrator | 2026-03-18 04:19:15.447731 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-18 04:19:15.447743 | orchestrator | Wednesday 18 March 2026 04:19:12 +0000 (0:00:02.212) 0:05:21.188 ******* 2026-03-18 04:19:15.447756 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-18 
04:19:15.447771 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-18 04:19:15.447784 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-18 04:19:15.447827 | orchestrator | 2026-03-18 04:19:15.447842 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-18 04:19:15.447854 | orchestrator | Wednesday 18 March 2026 04:19:15 +0000 (0:00:02.511) 0:05:23.699 ******* 2026-03-18 04:19:15.447874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-18 04:19:31.723021 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:19:31.723101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-18 04:19:31.723110 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:19:31.723114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-18 04:19:31.723118 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:19:31.723122 | orchestrator | 2026-03-18 04:19:31.723127 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-18 04:19:31.723132 | orchestrator | Wednesday 18 March 2026 04:19:17 +0000 (0:00:01.993) 0:05:25.693 ******* 2026-03-18 04:19:31.723137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-18 04:19:31.723142 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:19:31.723146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-18 04:19:31.723164 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:19:31.723168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-18 04:19:31.723172 | orchestrator | skipping: 
[testbed-node-2] 2026-03-18 04:19:31.723176 | orchestrator | 2026-03-18 04:19:31.723180 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-18 04:19:31.723184 | orchestrator | Wednesday 18 March 2026 04:19:19 +0000 (0:00:01.721) 0:05:27.415 ******* 2026-03-18 04:19:31.723187 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:19:31.723191 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:19:31.723195 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:19:31.723199 | orchestrator | 2026-03-18 04:19:31.723202 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-18 04:19:31.723206 | orchestrator | Wednesday 18 March 2026 04:19:20 +0000 (0:00:01.548) 0:05:28.964 ******* 2026-03-18 04:19:31.723210 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:19:31.723213 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:19:31.723217 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:19:31.723221 | orchestrator | 2026-03-18 04:19:31.723224 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-18 04:19:31.723228 | orchestrator | Wednesday 18 March 2026 04:19:23 +0000 (0:00:02.595) 0:05:31.559 ******* 2026-03-18 04:19:31.723232 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:19:31.723236 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:19:31.723239 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:19:31.723243 | orchestrator | 2026-03-18 04:19:31.723247 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-18 04:19:31.723251 | orchestrator | Wednesday 18 March 2026 04:19:24 +0000 (0:00:01.833) 0:05:33.392 ******* 2026-03-18 04:19:31.723255 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:19:31.723259 | orchestrator | 2026-03-18 04:19:31.723262 | orchestrator 
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-18 04:19:31.723266 | orchestrator | Wednesday 18 March 2026 04:19:27 +0000 (0:00:02.129) 0:05:35.522 ******* 2026-03-18 04:19:31.723286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:19:31.723293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:31.723301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-18 04:19:31.723306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-18 04:19:31.723317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:31.826990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:31.827108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}}})  2026-03-18 04:19:31.827148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 04:19:31.827162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 04:19:31.827175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:31.827203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-18 04:19:31.827234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:31.827247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:31.827267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-18 04:19:31.827281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:19:31.827293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-18 04:19:31.827318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:19:31.942763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:31.942921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:31.942936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-18 04:19:31.942961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-18 04:19:31.942989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 
'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-18 04:19:31.943008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}, 'pid_mode': ''}})  2026-03-18 04:19:31.943019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:31.943032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:31.943047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:31.943060 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:31.943078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:32.047163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:32.047274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 04:19:32.047292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 04:19:32.047305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 04:19:32.047341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 04:19:32.047395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:32.047409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:32.047422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-18 04:19:32.047436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-18 04:19:32.047448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:32.047466 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:32.047478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:32.047506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-18 
04:19:34.252884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-18 04:19:34.252993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-18 04:19:34.253029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-18 04:19:34.253043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-18 04:19:34.253081 | orchestrator | 2026-03-18 04:19:34.253096 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-18 04:19:34.253109 | orchestrator | Wednesday 18 March 2026 04:19:33 +0000 (0:00:06.008) 0:05:41.531 ******* 2026-03-18 04:19:34.253140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:19:34.253154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:34.253167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-18 04:19:34.253185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-18 04:19:34.253205 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:34.253225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:34.356442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:34.356545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 04:19:34.356562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 04:19:34.356592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:34.356630 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-18 04:19:34.356645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:34.356674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:34.356689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:19:34.356708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}}}})  2026-03-18 04:19:34.356730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-18 04:19:34.356742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:34.356754 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:19:34.356779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-18 04:19:34.451630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-18 04:19:34.451751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:34.451767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:19:34.451779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:34.451792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:34.451847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:34.451968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-18 04:19:34.451999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 04:19:34.452010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-18 04:19:34.452022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 04:19:34.452041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:34.564911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:34.565033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:34.565049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-18 04:19:34.565064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:34.565075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:34.565087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-18 04:19:34.565113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 
'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:34.565139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-18 04:19:34.565153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-18 04:19:34.565166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:34.565178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-18 04:19:34.565189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 
'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-18 04:19:34.565209 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:19:34.565230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-18 04:19:51.394385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-18 04:19:51.394504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-18 04:19:51.394520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-18 04:19:51.394529 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:19:51.394540 | orchestrator | 2026-03-18 04:19:51.394550 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-18 04:19:51.394560 | orchestrator | Wednesday 18 March 2026 04:19:35 +0000 (0:00:02.562) 0:05:44.093 ******* 2026-03-18 04:19:51.394570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:19:51.394583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:19:51.394595 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:19:51.394603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:19:51.394642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:19:51.394651 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:19:51.394660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:19:51.394686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:19:51.394696 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:19:51.394706 | orchestrator | 2026-03-18 04:19:51.394715 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-18 04:19:51.394725 | orchestrator | Wednesday 18 March 2026 04:19:38 +0000 
(0:00:02.947) 0:05:47.040 ******* 2026-03-18 04:19:51.394734 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:19:51.394744 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:19:51.394753 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:19:51.394762 | orchestrator | 2026-03-18 04:19:51.394771 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-18 04:19:51.394786 | orchestrator | Wednesday 18 March 2026 04:19:40 +0000 (0:00:02.359) 0:05:49.400 ******* 2026-03-18 04:19:51.394795 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:19:51.394804 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:19:51.394813 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:19:51.394822 | orchestrator | 2026-03-18 04:19:51.394832 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-18 04:19:51.394880 | orchestrator | Wednesday 18 March 2026 04:19:44 +0000 (0:00:03.030) 0:05:52.430 ******* 2026-03-18 04:19:51.394890 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:19:51.394899 | orchestrator | 2026-03-18 04:19:51.394908 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-18 04:19:51.394917 | orchestrator | Wednesday 18 March 2026 04:19:46 +0000 (0:00:02.438) 0:05:54.869 ******* 2026-03-18 04:19:51.394927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-18 04:19:51.394939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-18 04:19:51.394966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-18 04:20:08.805375 | orchestrator | 2026-03-18 04:20:08.805473 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-18 04:20:08.805491 | orchestrator | Wednesday 18 March 2026 04:19:51 +0000 (0:00:04.909) 0:05:59.779 ******* 2026-03-18 04:20:08.805521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-18 04:20:08.805537 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:20:08.805551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-18 04:20:08.805582 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:20:08.805595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-18 04:20:08.805606 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:20:08.805617 | orchestrator | 2026-03-18 04:20:08.805629 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-18 04:20:08.805639 | orchestrator | Wednesday 18 March 2026 04:19:53 +0000 (0:00:01.686) 0:06:01.466 ******* 2026-03-18 04:20:08.805652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-18 04:20:08.805680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-18 04:20:08.805693 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:20:08.805709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-18 04:20:08.805721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-18 04:20:08.805732 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:20:08.805743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-18 04:20:08.805755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-18 04:20:08.805766 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:20:08.805777 | orchestrator | 2026-03-18 04:20:08.805788 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-18 04:20:08.805799 | orchestrator | Wednesday 18 March 2026 04:19:55 +0000 (0:00:01.987) 0:06:03.453 ******* 2026-03-18 04:20:08.805810 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:20:08.805821 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:20:08.805839 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:20:08.805850 | orchestrator | 2026-03-18 04:20:08.805930 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-18 04:20:08.805951 | orchestrator | Wednesday 18 March 2026 04:19:57 +0000 (0:00:02.414) 0:06:05.868 ******* 2026-03-18 04:20:08.805971 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:20:08.805989 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:20:08.806007 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:20:08.806085 | orchestrator | 2026-03-18 04:20:08.806109 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-18 
04:20:08.806130 | orchestrator | Wednesday 18 March 2026 04:20:00 +0000 (0:00:03.101) 0:06:08.969 ******* 2026-03-18 04:20:08.806150 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:20:08.806170 | orchestrator | 2026-03-18 04:20:08.806191 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-18 04:20:08.806212 | orchestrator | Wednesday 18 March 2026 04:20:03 +0000 (0:00:02.537) 0:06:11.507 ******* 2026-03-18 04:20:08.806235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:20:08.806276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:20:09.956557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:20:09.956671 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:20:09.956688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 04:20:09.956700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 04:20:09.956733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:20:09.956746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 04:20:09.956762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 04:20:09.956773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:20:09.956784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 04:20:09.956794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 04:20:09.956804 | orchestrator | 2026-03-18 04:20:09.956816 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-18 04:20:09.956833 | orchestrator | Wednesday 18 March 2026 04:20:09 +0000 (0:00:06.839) 0:06:18.347 ******* 2026-03-18 04:20:10.800601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:20:10.800710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:20:10.800728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 04:20:10.800742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 04:20:10.800755 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:20:10.800791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:20:10.800813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:20:10.800825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-03-18 04:20:10.800837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 04:20:10.800849 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:20:10.800919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:20:10.800960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': 
{'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:20:33.561137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-18 04:20:33.561258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-18 04:20:33.561276 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:20:33.561291 | orchestrator | 2026-03-18 04:20:33.561303 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-18 04:20:33.561315 | orchestrator | Wednesday 18 March 2026 04:20:11 +0000 (0:00:01.944) 0:06:20.292 ******* 2026-03-18 04:20:33.561327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:20:33.561341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:20:33.561354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:20:33.561366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:20:33.561377 | orchestrator | skipping: [testbed-node-0] 2026-03-18 
04:20:33.561389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:20:33.561400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:20:33.561449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:20:33.561462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:20:33.561473 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:20:33.561484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:20:33.561513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:20:33.561524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:20:33.561535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:20:33.561546 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:20:33.561557 | orchestrator | 2026-03-18 04:20:33.561568 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-18 04:20:33.561579 | orchestrator | Wednesday 18 March 2026 04:20:14 +0000 (0:00:02.895) 0:06:23.187 ******* 2026-03-18 04:20:33.561589 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:20:33.561600 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:20:33.561611 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:20:33.561621 | orchestrator | 2026-03-18 04:20:33.561632 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-18 04:20:33.561643 | orchestrator | Wednesday 18 March 2026 04:20:17 +0000 (0:00:02.522) 0:06:25.710 ******* 2026-03-18 04:20:33.561656 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:20:33.561668 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:20:33.561681 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:20:33.561693 | orchestrator | 2026-03-18 04:20:33.561705 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-18 04:20:33.561717 | orchestrator | Wednesday 18 March 2026 04:20:20 +0000 (0:00:03.278) 0:06:28.989 ******* 2026-03-18 04:20:33.561729 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:20:33.561741 | orchestrator | 2026-03-18 04:20:33.561753 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-novncproxy] ****************** 2026-03-18 04:20:33.561765 | orchestrator | Wednesday 18 March 2026 04:20:23 +0000 (0:00:03.019) 0:06:32.008 ******* 2026-03-18 04:20:33.561777 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-18 04:20:33.561790 | orchestrator | 2026-03-18 04:20:33.561802 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-18 04:20:33.561814 | orchestrator | Wednesday 18 March 2026 04:20:25 +0000 (0:00:01.954) 0:06:33.963 ******* 2026-03-18 04:20:33.561828 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-18 04:20:33.561852 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-18 04:20:33.561871 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-18 04:20:33.561910 | orchestrator | 2026-03-18 04:20:33.561932 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-18 04:20:33.561955 | orchestrator | Wednesday 18 March 2026 04:20:31 +0000 (0:00:05.561) 0:06:39.524 ******* 2026-03-18 04:20:33.561976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-18 04:20:33.562000 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:20:57.416608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-18 04:20:57.416724 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:20:57.416744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-18 04:20:57.416758 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:20:57.416771 | orchestrator | 2026-03-18 04:20:57.416783 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-18 04:20:57.416795 | orchestrator | Wednesday 18 March 2026 04:20:33 +0000 (0:00:02.420) 0:06:41.945 ******* 2026-03-18 04:20:57.416808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-18 04:20:57.416850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-18 04:20:57.416864 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:20:57.416875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-18 04:20:57.416887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-18 04:20:57.416898 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:20:57.416909 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-18 04:20:57.416987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-18 04:20:57.416999 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:20:57.417010 | orchestrator | 2026-03-18 04:20:57.417021 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-18 04:20:57.417033 | orchestrator | Wednesday 18 March 2026 04:20:36 +0000 (0:00:02.672) 0:06:44.618 ******* 2026-03-18 04:20:57.417044 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:20:57.417055 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:20:57.417066 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:20:57.417077 | orchestrator | 2026-03-18 04:20:57.417088 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-18 04:20:57.417114 | orchestrator | Wednesday 18 March 2026 04:20:40 +0000 (0:00:03.796) 0:06:48.414 ******* 2026-03-18 04:20:57.417125 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:20:57.417136 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:20:57.417146 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:20:57.417157 | orchestrator | 2026-03-18 04:20:57.417169 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-18 04:20:57.417180 | orchestrator | Wednesday 18 March 2026 04:20:44 +0000 (0:00:04.075) 0:06:52.490 ******* 2026-03-18 04:20:57.417193 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-18 04:20:57.417205 | orchestrator | 2026-03-18 04:20:57.417215 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-18 04:20:57.417227 | orchestrator | Wednesday 18 March 2026 04:20:45 +0000 (0:00:01.744) 0:06:54.234 ******* 2026-03-18 04:20:57.417257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-18 04:20:57.417270 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:20:57.417282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-18 04:20:57.417303 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:20:57.417315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-18 04:20:57.417326 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:20:57.417336 | orchestrator | 2026-03-18 04:20:57.417347 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-18 04:20:57.417358 | orchestrator | Wednesday 18 March 2026 04:20:48 +0000 (0:00:02.510) 0:06:56.745 ******* 2026-03-18 04:20:57.417370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-18 04:20:57.417381 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:20:57.417392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-18 04:20:57.417403 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:20:57.417419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-18 04:20:57.417431 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:20:57.417442 | orchestrator | 2026-03-18 04:20:57.417452 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-18 04:20:57.417463 | orchestrator | Wednesday 18 March 2026 04:20:51 +0000 (0:00:02.730) 0:06:59.475 ******* 2026-03-18 04:20:57.417474 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:20:57.417485 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:20:57.417496 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:20:57.417507 | orchestrator | 2026-03-18 04:20:57.417517 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-18 04:20:57.417528 | orchestrator | Wednesday 18 March 2026 04:20:53 +0000 (0:00:02.412) 0:07:01.888 ******* 2026-03-18 04:20:57.417539 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:20:57.417549 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:20:57.417560 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:20:57.417571 | orchestrator | 2026-03-18 04:20:57.417582 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-18 04:20:57.417600 | orchestrator | Wednesday 18 March 2026 04:20:57 +0000 (0:00:03.912) 0:07:05.800 ******* 2026-03-18 04:21:26.632715 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:21:26.632834 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:21:26.632850 | orchestrator | ok: [testbed-node-2] 2026-03-18 
04:21:26.632863 | orchestrator | 2026-03-18 04:21:26.632876 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-18 04:21:26.632889 | orchestrator | Wednesday 18 March 2026 04:21:01 +0000 (0:00:04.023) 0:07:09.824 ******* 2026-03-18 04:21:26.632901 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-18 04:21:26.632914 | orchestrator | 2026-03-18 04:21:26.632926 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-18 04:21:26.632939 | orchestrator | Wednesday 18 March 2026 04:21:03 +0000 (0:00:02.390) 0:07:12.215 ******* 2026-03-18 04:21:26.633006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-18 04:21:26.633022 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:21:26.633034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-18 04:21:26.633047 | 
orchestrator | skipping: [testbed-node-1] 2026-03-18 04:21:26.633058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-18 04:21:26.633074 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:21:26.633086 | orchestrator | 2026-03-18 04:21:26.633098 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-18 04:21:26.633110 | orchestrator | Wednesday 18 March 2026 04:21:06 +0000 (0:00:02.464) 0:07:14.679 ******* 2026-03-18 04:21:26.633121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-18 04:21:26.633133 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:21:26.633161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-18 04:21:26.633195 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:21:26.633227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-18 04:21:26.633239 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:21:26.633252 | orchestrator | 2026-03-18 04:21:26.633265 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-18 04:21:26.633277 | orchestrator | Wednesday 18 March 2026 04:21:08 +0000 (0:00:02.499) 0:07:17.179 ******* 2026-03-18 04:21:26.633289 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:21:26.633301 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:21:26.633314 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:21:26.633327 | orchestrator | 2026-03-18 04:21:26.633339 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-18 04:21:26.633349 | orchestrator | Wednesday 18 March 2026 04:21:11 +0000 (0:00:02.577) 0:07:19.756 ******* 2026-03-18 04:21:26.633360 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:21:26.633371 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:21:26.633381 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:21:26.633392 | orchestrator | 2026-03-18 04:21:26.633403 | orchestrator | 
TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-18 04:21:26.633413 | orchestrator | Wednesday 18 March 2026 04:21:14 +0000 (0:00:03.612) 0:07:23.369 ******* 2026-03-18 04:21:26.633424 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:21:26.633435 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:21:26.633445 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:21:26.633456 | orchestrator | 2026-03-18 04:21:26.633467 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-18 04:21:26.633477 | orchestrator | Wednesday 18 March 2026 04:21:19 +0000 (0:00:04.575) 0:07:27.944 ******* 2026-03-18 04:21:26.633488 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:21:26.633499 | orchestrator | 2026-03-18 04:21:26.633510 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-18 04:21:26.633521 | orchestrator | Wednesday 18 March 2026 04:21:22 +0000 (0:00:02.696) 0:07:30.640 ******* 2026-03-18 04:21:26.633533 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 04:21:26.633547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-18 04:21:26.633573 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 04:21:26.633595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-18 04:21:29.002482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-18 04:21:29.002585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-18 04:21:29.002601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-18 04:21:29.002615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-18 04:21:29.002661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-18 04:21:29.002673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-18 04:21:29.002704 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-18 04:21:29.002717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-18 04:21:29.002729 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-18 04:21:29.002748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-18 04:21:29.002764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  
2026-03-18 04:21:29.002777 | orchestrator | 2026-03-18 04:21:29.002790 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-18 04:21:29.002802 | orchestrator | Wednesday 18 March 2026 04:21:27 +0000 (0:00:05.612) 0:07:36.253 ******* 2026-03-18 04:21:29.002822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-18 04:21:30.141268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-18 04:21:30.141369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 
'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-18 04:21:30.141386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-18 04:21:30.141423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-18 04:21:30.141440 | orchestrator | skipping: [testbed-node-0] 
2026-03-18 04:21:30.141555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-18 04:21:30.141592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-18 04:21:30.141626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-18 04:21:30.141639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-18 04:21:30.141661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-18 04:21:30.141672 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:21:30.141689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-18 04:21:30.141702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-18 04:21:30.141720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}})  2026-03-18 04:21:48.310721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-18 04:21:48.310837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-18 04:21:48.310880 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:21:48.310894 | orchestrator | 2026-03-18 04:21:48.310907 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-18 04:21:48.310919 | orchestrator | Wednesday 18 March 2026 04:21:30 +0000 (0:00:02.279) 0:07:38.532 ******* 2026-03-18 04:21:48.310932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-18 04:21:48.310945 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-18 04:21:48.310958 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:21:48.311030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-18 04:21:48.311044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-18 04:21:48.311055 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:21:48.311067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-18 04:21:48.311094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-18 04:21:48.311106 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:21:48.311117 | orchestrator | 2026-03-18 04:21:48.311128 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-18 04:21:48.311140 | orchestrator | Wednesday 18 March 2026 04:21:32 +0000 (0:00:02.272) 0:07:40.804 ******* 2026-03-18 04:21:48.311151 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:21:48.311164 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:21:48.311174 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:21:48.311187 | orchestrator | 2026-03-18 
04:21:48.311197 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-18 04:21:48.311208 | orchestrator | Wednesday 18 March 2026 04:21:34 +0000 (0:00:02.273) 0:07:43.078 ******* 2026-03-18 04:21:48.311218 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:21:48.311229 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:21:48.311239 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:21:48.311248 | orchestrator | 2026-03-18 04:21:48.311259 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-18 04:21:48.311270 | orchestrator | Wednesday 18 March 2026 04:21:38 +0000 (0:00:03.871) 0:07:46.949 ******* 2026-03-18 04:21:48.311281 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:21:48.311292 | orchestrator | 2026-03-18 04:21:48.311303 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-18 04:21:48.311314 | orchestrator | Wednesday 18 March 2026 04:21:41 +0000 (0:00:02.639) 0:07:49.588 ******* 2026-03-18 04:21:48.311345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 
'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:21:48.311372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:21:48.311385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:21:48.311403 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-18 04:21:48.311425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-18 04:21:52.584270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-18 04:21:52.584395 | orchestrator | 2026-03-18 04:21:52.584424 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-18 04:21:52.584443 | orchestrator | Wednesday 18 March 2026 04:21:48 +0000 (0:00:07.104) 0:07:56.693 ******* 2026-03-18 04:21:52.584487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:21:52.584514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-18 04:21:52.584563 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:21:52.584603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:21:52.584626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option 
httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-18 04:21:52.584645 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:21:52.584673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:21:52.584695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-18 04:21:52.584727 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:21:52.584739 | orchestrator | 2026-03-18 04:21:52.584751 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-18 04:21:52.584762 | orchestrator | Wednesday 18 March 2026 04:21:50 +0000 (0:00:02.305) 0:07:58.999 ******* 2026-03-18 04:21:52.584775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:21:52.584798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-18 04:22:01.949238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-18 04:22:01.949380 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:22:01.949413 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:22:01.949435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-18 04:22:01.949457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-18 04:22:01.949476 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:22:01.949495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:22:01.949533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-18 04:22:01.949553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-18 04:22:01.949570 | orchestrator | skipping: [testbed-node-2] 
2026-03-18 04:22:01.949608 | orchestrator | 2026-03-18 04:22:01.949626 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-18 04:22:01.949643 | orchestrator | Wednesday 18 March 2026 04:21:52 +0000 (0:00:01.975) 0:08:00.975 ******* 2026-03-18 04:22:01.949658 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:22:01.949674 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:22:01.949691 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:22:01.949707 | orchestrator | 2026-03-18 04:22:01.949724 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-18 04:22:01.949742 | orchestrator | Wednesday 18 March 2026 04:21:54 +0000 (0:00:01.545) 0:08:02.520 ******* 2026-03-18 04:22:01.949759 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:22:01.949776 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:22:01.949792 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:22:01.949807 | orchestrator | 2026-03-18 04:22:01.949824 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-18 04:22:01.949841 | orchestrator | Wednesday 18 March 2026 04:21:56 +0000 (0:00:02.492) 0:08:05.013 ******* 2026-03-18 04:22:01.949860 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:22:01.949877 | orchestrator | 2026-03-18 04:22:01.949893 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-18 04:22:01.949910 | orchestrator | Wednesday 18 March 2026 04:21:59 +0000 (0:00:02.678) 0:08:07.692 ******* 2026-03-18 04:22:01.949958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-18 04:22:01.950088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 04:22:01.950120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 
04:22:01.950150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-18 04:22:01.950186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:01.950206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 04:22:01.950225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 04:22:01.950268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:03.999818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:03.999923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 04:22:03.999981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-18 04:22:04.000068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 04:22:04.000083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:04.000094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:04.000125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 04:22:04.000144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:22:04.000165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 
'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-18 04:22:04.000178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:04.000190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:04.000201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 04:22:04.000221 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:22:06.262721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 
45s']}}}})  2026-03-18 04:22:06.262824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:06.262840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:06.262851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 04:22:06.262864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-18 04:22:06.262894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-18 04:22:06.262932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:06.262943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:06.262953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 04:22:06.262964 | orchestrator | 2026-03-18 04:22:06.262976 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-18 04:22:06.263035 | orchestrator | Wednesday 18 March 2026 04:22:05 +0000 (0:00:05.991) 0:08:13.683 ******* 2026-03-18 04:22:06.263049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-18 04:22:06.263060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 04:22:06.263088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:06.498668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:06.498761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 04:22:06.498776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:22:06.498788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-18 04:22:06.498818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:06.498843 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:06.498859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 04:22:06.498870 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:22:06.498882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-18 04:22:06.498892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 04:22:06.498902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:06.498918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:06.498927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 04:22:06.498948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:22:07.751443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-18 04:22:07.751558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:07.751576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:07.751613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 04:22:07.751626 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:22:07.751657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-18 04:22:07.751691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-18 04:22:07.751704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:07.751716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:07.751727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-18 04:22:07.751748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:22:07.751766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-18 04:22:07.751786 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:20.027146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:22:20.027265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-18 04:22:20.027284 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:22:20.027298 | orchestrator | 2026-03-18 04:22:20.027311 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-18 04:22:20.027347 | orchestrator | Wednesday 18 March 2026 04:22:07 +0000 (0:00:02.457) 0:08:16.140 ******* 2026-03-18 04:22:20.027360 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-18 04:22:20.027390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-18 04:22:20.027405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:22:20.027417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:22:20.027429 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:22:20.027441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-18 04:22:20.027467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-18 04:22:20.027479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:22:20.027508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:22:20.027520 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:22:20.027531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-18 04:22:20.027542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-18 04:22:20.027562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:22:20.027574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-18 04:22:20.027585 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:22:20.027596 | orchestrator | 2026-03-18 04:22:20.027610 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-18 04:22:20.027622 | orchestrator | Wednesday 18 March 2026 04:22:09 +0000 (0:00:01.921) 0:08:18.062 ******* 2026-03-18 04:22:20.027635 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:22:20.027648 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:22:20.027661 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:22:20.027674 | orchestrator | 2026-03-18 04:22:20.027687 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-18 04:22:20.027700 | orchestrator | Wednesday 18 March 2026 04:22:11 +0000 (0:00:01.970) 0:08:20.033 ******* 2026-03-18 04:22:20.027712 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:22:20.027726 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:22:20.027738 | orchestrator | skipping: 
[testbed-node-2] 2026-03-18 04:22:20.027750 | orchestrator | 2026-03-18 04:22:20.027762 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-18 04:22:20.027775 | orchestrator | Wednesday 18 March 2026 04:22:13 +0000 (0:00:02.237) 0:08:22.270 ******* 2026-03-18 04:22:20.027789 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:22:20.027802 | orchestrator | 2026-03-18 04:22:20.027815 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-18 04:22:20.027828 | orchestrator | Wednesday 18 March 2026 04:22:16 +0000 (0:00:02.465) 0:08:24.735 ******* 2026-03-18 04:22:20.027848 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 04:22:20.027875 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 04:22:38.172115 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 04:22:38.172231 | orchestrator | 2026-03-18 04:22:38.172248 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-18 04:22:38.172260 | 
orchestrator | Wednesday 18 March 2026 04:22:20 +0000 (0:00:03.673) 0:08:28.409 ******* 2026-03-18 04:22:38.172272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-18 04:22:38.172284 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:22:38.172311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-18 04:22:38.172322 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:22:38.172349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-18 04:22:38.172383 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:22:38.172394 | orchestrator | 2026-03-18 04:22:38.172404 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-18 04:22:38.172414 | orchestrator | Wednesday 18 March 2026 04:22:21 +0000 (0:00:01.475) 0:08:29.885 ******* 2026-03-18 04:22:38.172425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-18 04:22:38.172436 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:22:38.172446 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-18 04:22:38.172455 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:22:38.172465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-18 04:22:38.172474 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:22:38.172484 | orchestrator | 2026-03-18 04:22:38.172494 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-18 04:22:38.172503 | orchestrator | Wednesday 18 March 2026 04:22:23 +0000 (0:00:01.626) 0:08:31.512 ******* 2026-03-18 04:22:38.172513 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:22:38.172523 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:22:38.172533 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:22:38.172542 | orchestrator | 2026-03-18 04:22:38.172553 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-18 04:22:38.172563 | orchestrator | Wednesday 18 March 2026 04:22:25 +0000 (0:00:02.225) 0:08:33.737 ******* 2026-03-18 04:22:38.172572 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:22:38.172582 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:22:38.172592 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:22:38.172601 | orchestrator | 2026-03-18 04:22:38.172611 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-18 04:22:38.172621 | orchestrator | Wednesday 18 March 2026 04:22:27 +0000 (0:00:02.384) 0:08:36.122 ******* 2026-03-18 04:22:38.172633 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:22:38.172644 | orchestrator | 2026-03-18 04:22:38.172655 | orchestrator | TASK 
[haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-18 04:22:38.172666 | orchestrator | Wednesday 18 March 2026 04:22:30 +0000 (0:00:02.321) 0:08:38.444 ******* 2026-03-18 04:22:38.172684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-18 04:22:38.172706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-18 04:22:38.172727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-18 04:22:39.868266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-18 04:22:39.868403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-18 04:22:39.868456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-18 04:22:39.868471 | orchestrator | 2026-03-18 04:22:39.868485 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-18 04:22:39.868501 | orchestrator | Wednesday 18 March 2026 04:22:38 +0000 (0:00:08.113) 0:08:46.557 ******* 2026-03-18 04:22:39.868548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-18 04:22:39.868572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-18 04:22:39.868597 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:22:39.868610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-18 04:22:39.868623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-18 04:22:39.868634 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:22:39.868656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-18 04:23:01.734806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-18 04:23:01.734951 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:23:01.734970 | orchestrator | 2026-03-18 04:23:01.734985 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-18 04:23:01.734998 | orchestrator | Wednesday 18 March 2026 04:22:39 +0000 (0:00:01.700) 
0:08:48.257 ******* 2026-03-18 04:23:01.735016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-18 04:23:01.735031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-18 04:23:01.735109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-18 04:23:01.735123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-18 04:23:01.735134 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:23:01.735146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-18 04:23:01.735157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-18 04:23:01.735168 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-18 04:23:01.735180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-18 04:23:01.735190 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:23:01.735202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-18 04:23:01.735213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-18 04:23:01.735244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-18 04:23:01.735269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-18 04:23:01.735282 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:23:01.735295 | orchestrator | 
2026-03-18 04:23:01.735310 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-18 04:23:01.735323 | orchestrator | Wednesday 18 March 2026 04:22:41 +0000 (0:00:02.145) 0:08:50.403 ******* 2026-03-18 04:23:01.735336 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:23:01.735349 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:23:01.735363 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:23:01.735376 | orchestrator | 2026-03-18 04:23:01.735388 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-18 04:23:01.735401 | orchestrator | Wednesday 18 March 2026 04:22:44 +0000 (0:00:02.402) 0:08:52.805 ******* 2026-03-18 04:23:01.735414 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:23:01.735427 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:23:01.735439 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:23:01.735452 | orchestrator | 2026-03-18 04:23:01.735464 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-18 04:23:01.735476 | orchestrator | Wednesday 18 March 2026 04:22:47 +0000 (0:00:03.162) 0:08:55.968 ******* 2026-03-18 04:23:01.735489 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:23:01.735501 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:23:01.735519 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:23:01.735532 | orchestrator | 2026-03-18 04:23:01.735545 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-18 04:23:01.735558 | orchestrator | Wednesday 18 March 2026 04:22:48 +0000 (0:00:01.373) 0:08:57.342 ******* 2026-03-18 04:23:01.735571 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:23:01.735582 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:23:01.735592 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:23:01.735603 | orchestrator | 2026-03-18 04:23:01.735614 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-03-18 04:23:01.735625 | orchestrator | Wednesday 18 March 2026 04:22:50 +0000 (0:00:01.388) 0:08:58.730 ******* 2026-03-18 04:23:01.735636 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:23:01.735647 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:23:01.735658 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:23:01.735669 | orchestrator | 2026-03-18 04:23:01.735679 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-18 04:23:01.735690 | orchestrator | Wednesday 18 March 2026 04:22:52 +0000 (0:00:01.793) 0:09:00.524 ******* 2026-03-18 04:23:01.735701 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:23:01.735711 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:23:01.735722 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:23:01.735734 | orchestrator | 2026-03-18 04:23:01.735744 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-18 04:23:01.735755 | orchestrator | Wednesday 18 March 2026 04:22:53 +0000 (0:00:01.413) 0:09:01.938 ******* 2026-03-18 04:23:01.735766 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:23:01.735777 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:23:01.735788 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:23:01.735799 | orchestrator | 2026-03-18 04:23:01.735810 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-03-18 04:23:01.735821 | orchestrator | Wednesday 18 March 2026 04:22:54 +0000 (0:00:01.441) 0:09:03.379 ******* 2026-03-18 04:23:01.735831 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:23:01.735843 | orchestrator | 2026-03-18 04:23:01.735854 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 
2026-03-18 04:23:01.735865 | orchestrator | Wednesday 18 March 2026 04:22:57 +0000 (0:00:02.764) 0:09:06.144 ******* 2026-03-18 04:23:01.735884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-18 04:23:01.735905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-18 04:23:05.788980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-18 04:23:05.789181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 04:23:05.789216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 04:23:05.789237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-18 04:23:05.789286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 04:23:05.789310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-18 04:23:05.789356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-03-18 04:23:05.789379 | orchestrator | 2026-03-18 04:23:05.789400 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-03-18 04:23:05.789419 | orchestrator | Wednesday 18 March 2026 04:23:01 +0000 (0:00:03.971) 0:09:10.116 ******* 2026-03-18 04:23:05.789438 | orchestrator | changed: [testbed-node-0] => { 2026-03-18 04:23:05.789459 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:23:05.789479 | orchestrator | } 2026-03-18 04:23:05.789501 | orchestrator | changed: [testbed-node-1] => { 2026-03-18 04:23:05.789521 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:23:05.789542 | orchestrator | } 2026-03-18 04:23:05.789562 | orchestrator | changed: [testbed-node-2] => { 2026-03-18 04:23:05.789581 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:23:05.789600 | orchestrator | } 2026-03-18 04:23:05.789619 | orchestrator | 2026-03-18 04:23:05.789639 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-18 04:23:05.789658 | orchestrator | Wednesday 18 March 2026 04:23:03 +0000 (0:00:01.546) 0:09:11.662 ******* 2026-03-18 04:23:05.789689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-18 04:23:05.789710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-18 04:23:05.789744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-18 04:23:05.789764 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:23:05.789784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-18 04:23:05.789804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-18 04:23:05.789837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-18 04:25:07.644542 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:25:07.644624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-18 04:25:07.644648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-18 04:25:07.644673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-18 04:25:07.644683 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:25:07.644691 | orchestrator |
2026-03-18 04:25:07.644700 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-03-18 04:25:07.644709 | orchestrator | Wednesday 18 March 2026 04:23:05 +0000 (0:00:02.510) 0:09:14.173 *******
2026-03-18 04:25:07.644717 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:25:07.644726 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:25:07.644733 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:25:07.644741 | orchestrator |
2026-03-18 04:25:07.644749 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-03-18 04:25:07.644757 | orchestrator | Wednesday 18 March 2026 04:23:07 +0000 (0:00:01.702) 0:09:15.875 *******
2026-03-18 04:25:07.644765 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:25:07.644773 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:25:07.644780 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:25:07.644789 | orchestrator |
2026-03-18 04:25:07.644803 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-03-18 04:25:07.644816 | orchestrator | Wednesday 18 March 2026 04:23:08 +0000 (0:00:01.419) 0:09:17.295 *******
2026-03-18 04:25:07.644830 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:25:07.644844 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:25:07.644858 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:25:07.644872 | orchestrator |
2026-03-18 04:25:07.644885 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-03-18 04:25:07.644895 | orchestrator | Wednesday 18 March 2026 04:23:16 +0000 (0:00:07.127) 0:09:24.422 *******
2026-03-18 04:25:07.644903 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:25:07.644910 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:25:07.644918 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:25:07.644926 | orchestrator |
2026-03-18 04:25:07.644934 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-03-18 04:25:07.644941 | orchestrator | Wednesday 18 March 2026 04:23:23 +0000 (0:00:07.535) 0:09:31.958 *******
2026-03-18 04:25:07.644949 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:25:07.644957 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:25:07.644965 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:25:07.644972 | orchestrator |
2026-03-18 04:25:07.644980 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-03-18 04:25:07.644988 | orchestrator | Wednesday 18 March 2026 04:23:30 +0000 (0:00:07.161) 0:09:39.119 *******
2026-03-18 04:25:07.644996 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:25:07.645004 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:25:07.645012 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:25:07.645019 | orchestrator |
2026-03-18 04:25:07.645027 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-18 04:25:07.645035 | orchestrator | Wednesday 18 March 2026 04:23:38 +0000 (0:00:07.719) 0:09:46.839 *******
2026-03-18 04:25:07.645043 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:25:07.645051 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:25:07.645059 | orchestrator |
2026-03-18 04:25:07.645066 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-18 04:25:07.645074 | orchestrator | Wednesday 18 March 2026 04:23:42 +0000 (0:00:03.699) 0:09:50.539 *******
2026-03-18 04:25:07.645093 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:25:07.645101 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:25:07.645109 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:25:07.645125 | orchestrator |
2026-03-18 04:25:07.645147 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-18 04:25:07.645157 | orchestrator | Wednesday 18 March 2026 04:23:55 +0000 (0:00:13.543) 0:10:04.082 *******
2026-03-18 04:25:07.645213 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:25:07.645227 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:25:07.645237 | orchestrator |
2026-03-18 04:25:07.645246 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-18 04:25:07.645255 | orchestrator | Wednesday 18 March 2026 04:23:59 +0000 (0:00:03.753) 0:10:07.836 *******
2026-03-18 04:25:07.645264 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:25:07.645274 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:25:07.645283 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:25:07.645292 | orchestrator |
2026-03-18 04:25:07.645300 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-18 04:25:07.645308 | orchestrator | Wednesday 18 March 2026 04:24:06 +0000 (0:00:07.150) 0:10:14.986 *******
2026-03-18 04:25:07.645315 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:25:07.645323 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:25:07.645331 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:25:07.645338 | orchestrator |
2026-03-18 04:25:07.645346 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-18 04:25:07.645358 | orchestrator | Wednesday 18 March 2026 04:24:13 +0000 (0:00:06.901) 0:10:21.887 *******
2026-03-18 04:25:07.645366 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:25:07.645374 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:25:07.645381 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:25:07.645389 | orchestrator |
2026-03-18 04:25:07.645397 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-18 04:25:07.645404 | orchestrator | Wednesday 18 March 2026 04:24:20 +0000 (0:00:06.855) 0:10:28.743 *******
2026-03-18 04:25:07.645412 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:25:07.645420 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:25:07.645428 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:25:07.645435 | orchestrator |
2026-03-18 04:25:07.645443 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-18 04:25:07.645451 | orchestrator | Wednesday 18 March 2026 04:24:27 +0000 (0:00:06.887) 0:10:35.630 *******
2026-03-18 04:25:07.645458 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:25:07.645466 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:25:07.645474 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:25:07.645482 | orchestrator |
2026-03-18 04:25:07.645489 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] **************
2026-03-18 04:25:07.645497 | orchestrator | Wednesday 18 March 2026 04:24:34 +0000 (0:00:07.419) 0:10:43.050 *******
2026-03-18 04:25:07.645505 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:25:07.645512 | orchestrator |
2026-03-18 04:25:07.645520 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-18 04:25:07.645528 | orchestrator | Wednesday 18 March 2026 04:24:38 +0000 (0:00:03.571) 0:10:46.622 *******
2026-03-18 04:25:07.645536 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:25:07.645543 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:25:07.645551 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:25:07.645559 | orchestrator |
2026-03-18 04:25:07.645566 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] *************
2026-03-18 04:25:07.645574 | orchestrator | Wednesday 18 March 2026 04:24:51 +0000 (0:00:12.995) 0:10:59.617 *******
2026-03-18 04:25:07.645582 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:25:07.645590 | orchestrator |
2026-03-18 04:25:07.645597 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-18 04:25:07.645605 | orchestrator | Wednesday 18 March 2026 04:24:55 +0000 (0:00:04.751) 0:11:04.369 *******
2026-03-18 04:25:07.645613 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:25:07.645620 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:25:07.645635 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:25:07.645642 | orchestrator |
2026-03-18 04:25:07.645650 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-18 04:25:07.645658 | orchestrator | Wednesday 18 March 2026 04:25:02 +0000 (0:00:06.814) 0:11:11.183 *******
2026-03-18 04:25:07.645665 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:25:07.645673 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:25:07.645681 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:25:07.645688 | orchestrator |
2026-03-18 04:25:07.645696 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-18 04:25:07.645704 | orchestrator | Wednesday 18 March 2026 04:25:04 +0000 (0:00:02.005) 0:11:13.189 *******
2026-03-18 04:25:07.645711 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:25:07.645719 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:25:07.645727 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:25:07.645734 | orchestrator |
2026-03-18 04:25:07.645742 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 04:25:07.645751 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-03-18 04:25:07.645759 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-03-18 04:25:07.645767 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-03-18 04:25:07.645775 | orchestrator |
2026-03-18 04:25:07.645783 | orchestrator |
2026-03-18 04:25:07.645791 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 04:25:07.645799 | orchestrator | Wednesday 18 March 2026 04:25:07 +0000 (0:00:02.833) 0:11:16.023 *******
2026-03-18 04:25:07.645806 | orchestrator | ===============================================================================
2026-03-18 04:25:07.645814 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.54s
2026-03-18 04:25:07.645822 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 13.00s
2026-03-18 04:25:07.645834 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.11s
2026-03-18 04:25:07.645854 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.72s
2026-03-18 04:25:08.283466 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.54s
2026-03-18 04:25:08.283553 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.42s
2026-03-18 04:25:08.283567 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.16s
2026-03-18 04:25:08.283579 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.15s
2026-03-18 04:25:08.283590 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.13s
2026-03-18 04:25:08.283601 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.10s
2026-03-18 04:25:08.283611 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.90s
2026-03-18 04:25:08.283622 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.89s
2026-03-18 04:25:08.283635 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.86s
2026-03-18 04:25:08.283655 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.84s
2026-03-18 04:25:08.283692 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 6.81s
2026-03-18 04:25:08.283713 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.01s
2026-03-18 04:25:08.283732 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.99s
2026-03-18 04:25:08.283752 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.97s
2026-03-18 04:25:08.283771 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 5.61s
2026-03-18 04:25:08.283805 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.56s
2026-03-18 04:25:08.490269 | orchestrator | + osism apply -a upgrade opensearch
2026-03-18 04:25:10.331603 | orchestrator | 2026-03-18 04:25:10 | INFO  | Task 02d3c1fc-27b9-4043-8599-d54b57c74da2 (opensearch) was prepared for execution.
2026-03-18 04:25:10.331712 | orchestrator | 2026-03-18 04:25:10 | INFO  | It takes a moment until task 02d3c1fc-27b9-4043-8599-d54b57c74da2 (opensearch) has been started and output is visible here.
2026-03-18 04:25:21.452348 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-03-18 04:25:21.452433 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-03-18 04:25:21.452447 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-03-18 04:25:21.452452 | orchestrator | (): 'NoneType' object is not subscriptable
2026-03-18 04:25:21.452462 | orchestrator |
2026-03-18 04:25:21.452468 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-18 04:25:21.452472 | orchestrator |
2026-03-18 04:25:21.452478 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-18 04:25:21.452483 | orchestrator | Wednesday 18 March 2026 04:25:15 +0000 (0:00:01.101) 0:00:01.101 *******
2026-03-18 04:25:21.452488 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:25:21.452493 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:25:21.452498 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:25:21.452503 | orchestrator |
2026-03-18 04:25:21.452508 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-18 04:25:21.452512 | orchestrator | Wednesday 18 March 2026 04:25:16 +0000 (0:00:00.979)
0:00:02.080 *******
2026-03-18 04:25:21.452517 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-03-18 04:25:21.452522 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-03-18 04:25:21.452527 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-03-18 04:25:21.452532 | orchestrator |
2026-03-18 04:25:21.452536 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-03-18 04:25:21.452541 | orchestrator |
2026-03-18 04:25:21.452546 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-18 04:25:21.452551 | orchestrator | Wednesday 18 March 2026 04:25:17 +0000 (0:00:00.979) 0:00:03.060 *******
2026-03-18 04:25:21.452556 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 04:25:21.452561 | orchestrator |
2026-03-18 04:25:21.452566 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-03-18 04:25:21.452570 | orchestrator | Wednesday 18 March 2026 04:25:18 +0000 (0:00:01.131) 0:00:04.192 *******
2026-03-18 04:25:21.452575 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-18 04:25:21.452580 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-18 04:25:21.452585 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-18 04:25:21.452589 | orchestrator |
2026-03-18 04:25:21.452594 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-03-18 04:25:21.452599 | orchestrator | Wednesday 18 March 2026 04:25:19 +0000 (0:00:01.445) 0:00:05.637 *******
2026-03-18 04:25:21.452606 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:21.452641 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:21.452658 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:21.452665 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:21.452672 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:21.452688 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:25.982392 | orchestrator |
2026-03-18 04:25:25.982493 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-18 04:25:25.982515 | orchestrator | Wednesday 18 March 2026 04:25:21 +0000 (0:00:01.440) 0:00:07.078 *******
2026-03-18 04:25:25.982536 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 04:25:25.982555 | orchestrator |
2026-03-18 04:25:25.982574 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-03-18 04:25:25.982594 | orchestrator | Wednesday 18 March 2026 04:25:22 +0000 (0:00:00.963) 0:00:08.042 *******
2026-03-18 04:25:25.982617 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:25.982632 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:25.982682 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:25.982717 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:25.982732 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:25.982745 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:25.982764 | orchestrator |
2026-03-18 04:25:25.982776 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-03-18 04:25:25.982787 | orchestrator | Wednesday 18 March 2026 04:25:25 +0000 (0:00:02.660) 0:00:10.703 *******
2026-03-18 04:25:25.982805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:25.982827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:27.001679 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:25:27.001779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:27.001817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:27.001843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:27.001872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-18 04:25:27.001883 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:25:27.001892 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:25:27.001901 | orchestrator | 2026-03-18 04:25:27.001911 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-18 04:25:27.001921 | orchestrator | Wednesday 18 March 2026 04:25:25 +0000 (0:00:00.914) 0:00:11.617 ******* 2026-03-18 04:25:27.001931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-18 04:25:27.001953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:27.001963 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:25:27.001972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:27.001989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:29.653862 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:25:29.653967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:29.654005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:29.654082 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:25:29.654096 | orchestrator |
2026-03-18 04:25:29.654108 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-03-18 04:25:29.654120 | orchestrator | Wednesday 18 March 2026 04:25:26 +0000 (0:00:01.018) 0:00:12.636 *******
2026-03-18 04:25:29.654132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:29.654246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:29.654284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:29.654329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:29.654344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:29.654367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:38.525651 | orchestrator |
2026-03-18 04:25:38.525789 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-03-18 04:25:38.525809 | orchestrator | Wednesday 18 March 2026 04:25:29 +0000 (0:00:02.649) 0:00:15.286 *******
2026-03-18 04:25:38.525821 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:25:38.525833 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:25:38.525844 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:25:38.525855 | orchestrator |
2026-03-18 04:25:38.525866 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-03-18 04:25:38.525877 | orchestrator | Wednesday 18 March 2026 04:25:32 +0000 (0:00:02.434) 0:00:17.721 *******
2026-03-18 04:25:38.525888 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:25:38.525899 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:25:38.525910 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:25:38.525921 | orchestrator |
2026-03-18 04:25:38.525932 | orchestrator | TASK [service-check-containers : opensearch | Check containers] ****************
2026-03-18 04:25:38.525943 | orchestrator | Wednesday 18 March 2026 04:25:34 +0000 (0:00:02.033) 0:00:19.754 *******
2026-03-18 04:25:38.525975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:38.525991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:38.526003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:25:38.526157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:38.526184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:38.526240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:25:38.526276 | orchestrator |
2026-03-18 04:25:38.526299 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] ***
2026-03-18 04:25:38.526317 | orchestrator | Wednesday 18 March 2026 04:25:36 +0000 (0:00:02.679) 0:00:22.434 *******
2026-03-18 04:25:38.526331 | orchestrator | changed: [testbed-node-0] => {
2026-03-18 04:25:38.526344 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:25:38.526357 | orchestrator | }
2026-03-18 04:25:38.526369 | orchestrator | changed: [testbed-node-1] => {
2026-03-18 04:25:38.526381 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:25:38.526393 | orchestrator | }
2026-03-18 04:25:38.526405 | orchestrator | changed: [testbed-node-2] => {
2026-03-18 04:25:38.526417 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:25:38.526429 | orchestrator | }
2026-03-18 04:25:38.526441 | orchestrator |
2026-03-18 04:25:38.526454 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-18 04:25:38.526467 | orchestrator | Wednesday 18 March 2026 04:25:37 +0000 (0:00:00.435) 0:00:22.870 *******
2026-03-18 04:25:38.526490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:28:34.155128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:28:34.155241 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:28:34.155261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:28:34.155295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:28:34.155307 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:28:34.155335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-03-18 04:28:34.155399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-18 04:28:34.155411 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:28:34.155422 | orchestrator |
2026-03-18 04:28:34.155432 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-18 04:28:34.155443 | orchestrator | Wednesday 18 March 2026 04:25:38 +0000 (0:00:00.536) 0:00:24.161 *******
2026-03-18 04:28:34.155466 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:28:34.155477 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-03-18 04:28:34.155487 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-03-18 04:28:34.155507 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:28:34.155517 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:28:34.155527 | orchestrator |
2026-03-18 04:28:34.155537 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-18 04:28:34.155547 | orchestrator | Wednesday 18 March 2026 04:25:39 +0000 (0:00:00.075) 0:00:24.698 *******
2026-03-18 04:28:34.155556 | orchestrator |
2026-03-18 04:28:34.155566 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-18 04:28:34.155576 | orchestrator | Wednesday 18 March 2026 04:25:39 +0000 (0:00:00.076) 0:00:24.773 *******
2026-03-18 04:28:34.155585 | orchestrator |
2026-03-18 04:28:34.155595 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-18 04:28:34.155604 | orchestrator | Wednesday 18 March 2026 04:25:39 +0000 (0:00:00.075) 0:00:24.849 *******
2026-03-18 04:28:34.155614 | orchestrator |
2026-03-18 04:28:34.155623 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-03-18 04:28:34.155633 | orchestrator | Wednesday 18 March 2026 04:25:39 +0000 (0:00:00.075) 0:00:24.925 *******
2026-03-18 04:28:34.155643 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:28:34.155655 | orchestrator |
2026-03-18 04:28:34.155667 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-03-18 04:28:34.155678 | orchestrator | Wednesday 18 March 2026 04:25:41 +0000 (0:00:02.405) 0:00:27.330 *******
2026-03-18 04:28:34.155689 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:28:34.155700 | orchestrator |
2026-03-18 04:28:34.155711 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-03-18 04:28:34.155722 | orchestrator | Wednesday 18 March 2026 04:25:46 +0000 (0:00:04.528) 0:00:31.858 *******
2026-03-18 04:28:34.155733 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:28:34.155745 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:28:34.155755 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:28:34.155767 | orchestrator |
2026-03-18 04:28:34.155778 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-03-18 04:28:34.155789 | orchestrator | Wednesday 18 March 2026 04:26:59 +0000 (0:01:13.251) 0:01:45.110 *******
2026-03-18 04:28:34.155800 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:28:34.155811 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:28:34.155822 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:28:34.155833 | orchestrator |
2026-03-18 04:28:34.155844 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-18 04:28:34.155855 | orchestrator | Wednesday 18 March 2026 04:28:28 +0000 (0:01:29.050) 0:03:14.160 *******
2026-03-18 04:28:34.155866 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 04:28:34.155877 | orchestrator |
2026-03-18 04:28:34.155888 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-03-18 04:28:34.155899 | orchestrator | Wednesday 18
March 2026 04:28:29 +0000 (0:00:00.985) 0:03:15.146 ******* 2026-03-18 04:28:34.155910 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:28:34.155922 | orchestrator | 2026-03-18 04:28:34.155933 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-18 04:28:34.155944 | orchestrator | Wednesday 18 March 2026 04:28:31 +0000 (0:00:02.282) 0:03:17.428 ******* 2026-03-18 04:28:34.155955 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:28:34.155965 | orchestrator | 2026-03-18 04:28:34.155981 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-18 04:28:36.398405 | orchestrator | Wednesday 18 March 2026 04:28:34 +0000 (0:00:02.354) 0:03:19.783 ******* 2026-03-18 04:28:36.398594 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:28:36.422144 | orchestrator | 2026-03-18 04:28:36.422221 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-18 04:28:36.422236 | orchestrator | Wednesday 18 March 2026 04:28:34 +0000 (0:00:00.255) 0:03:20.039 ******* 2026-03-18 04:28:36.422247 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:28:36.422258 | orchestrator | 2026-03-18 04:28:36.422270 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 04:28:36.422283 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-18 04:28:36.422296 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-18 04:28:36.422307 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-18 04:28:36.422317 | orchestrator | 2026-03-18 04:28:36.422328 | orchestrator | 2026-03-18 04:28:36.422339 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 04:28:36.422416 | 
orchestrator | Wednesday 18 March 2026 04:28:36 +0000 (0:00:01.616) 0:03:21.655 ******* 2026-03-18 04:28:36.422448 | orchestrator | =============================================================================== 2026-03-18 04:28:36.422459 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 89.05s 2026-03-18 04:28:36.422470 | orchestrator | opensearch : Restart opensearch container ------------------------------ 73.25s 2026-03-18 04:28:36.422481 | orchestrator | opensearch : Perform a flush -------------------------------------------- 4.53s 2026-03-18 04:28:36.422492 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.68s 2026-03-18 04:28:36.422503 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.66s 2026-03-18 04:28:36.422514 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.65s 2026-03-18 04:28:36.422524 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.44s 2026-03-18 04:28:36.422535 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 2.41s 2026-03-18 04:28:36.422546 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.35s 2026-03-18 04:28:36.422558 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.28s 2026-03-18 04:28:36.422568 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.03s 2026-03-18 04:28:36.422579 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 1.62s 2026-03-18 04:28:36.422590 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.45s 2026-03-18 04:28:36.422601 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.44s 2026-03-18 04:28:36.422612 | orchestrator | 
service-check-containers : Include tasks -------------------------------- 1.29s 2026-03-18 04:28:36.422623 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.13s 2026-03-18 04:28:36.422634 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.02s 2026-03-18 04:28:36.422645 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.99s 2026-03-18 04:28:36.422655 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.98s 2026-03-18 04:28:36.422666 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.98s 2026-03-18 04:28:36.715313 | orchestrator | + osism apply -a upgrade memcached 2026-03-18 04:28:38.859447 | orchestrator | 2026-03-18 04:28:38 | INFO  | Task 4924afa7-4ca9-4e57-9389-7f68281d46fd (memcached) was prepared for execution. 2026-03-18 04:28:38.859550 | orchestrator | 2026-03-18 04:28:38 | INFO  | It takes a moment until task 4924afa7-4ca9-4e57-9389-7f68281d46fd (memcached) has been started and output is visible here. 
2026-03-18 04:29:03.543224 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-03-18 04:29:03.543339 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-03-18 04:29:03.543407 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-03-18 04:29:03.543419 | orchestrator | (): 'NoneType' object is not subscriptable
2026-03-18 04:29:03.543442 | orchestrator |
2026-03-18 04:29:03.543454 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-18 04:29:03.543465 | orchestrator |
2026-03-18 04:29:03.543476 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-18 04:29:03.543487 | orchestrator | Wednesday 18 March 2026 04:28:44 +0000 (0:00:01.236) 0:00:01.236 *******
2026-03-18 04:29:03.543499 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:29:03.543511 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:29:03.543522 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:29:03.543532 | orchestrator |
2026-03-18 04:29:03.543543 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-18 04:29:03.543555 | orchestrator | Wednesday 18 March 2026 04:28:45 +0000 (0:00:01.245) 0:00:02.481 *******
2026-03-18 04:29:03.543566 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-18 04:29:03.543577 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-18 04:29:03.543588 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-18 04:29:03.543599 | orchestrator |
2026-03-18 04:29:03.543610 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-18 04:29:03.543620 | orchestrator |
2026-03-18 04:29:03.543631 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-18 04:29:03.543642 | orchestrator | Wednesday 18 March 2026 04:28:46 +0000 (0:00:00.882) 0:00:03.363 *******
2026-03-18 04:29:03.543653 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 04:29:03.543664 | orchestrator |
2026-03-18 04:29:03.543675 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-18 04:29:03.543686 | orchestrator | Wednesday 18 March 2026 04:28:47 +0000 (0:00:01.199) 0:00:04.563 *******
2026-03-18 04:29:03.543697 | orchestrator | ok: [testbed-node-0] => (item=memcached)
2026-03-18 04:29:03.543709 | orchestrator | ok: [testbed-node-1] => (item=memcached)
2026-03-18 04:29:03.543720 | orchestrator | ok: [testbed-node-2] => (item=memcached)
2026-03-18 04:29:03.543731 | orchestrator |
2026-03-18 04:29:03.543742 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-18 04:29:03.543753 | orchestrator | Wednesday 18 March 2026 04:28:48 +0000 (0:00:01.189) 0:00:05.752 *******
2026-03-18 04:29:03.543767 | orchestrator | ok: [testbed-node-1] => (item=memcached)
2026-03-18 04:29:03.543797 | orchestrator | ok: [testbed-node-0] => (item=memcached)
2026-03-18 04:29:03.543810 | orchestrator | ok: [testbed-node-2] => (item=memcached)
2026-03-18 04:29:03.543823 | orchestrator |
2026-03-18 04:29:03.543836 | orchestrator | TASK [service-check-containers : memcached | Check containers] *****************
2026-03-18 04:29:03.543848 | orchestrator | Wednesday 18 March 2026 04:28:50 +0000 (0:00:01.914) 0:00:07.667 *******
2026-03-18 04:29:03.543864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-18 04:29:03.543904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-18 04:29:03.543936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-18 04:29:03.543950 | orchestrator |
2026-03-18 04:29:03.543963 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] ***
2026-03-18 04:29:03.543976 | orchestrator | Wednesday 18 March 2026 04:28:52 +0000 (0:00:01.248) 0:00:08.916 *******
2026-03-18 04:29:03.543988 | orchestrator | changed: [testbed-node-0] => {
2026-03-18 04:29:03.544001 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:29:03.544014 | orchestrator | }
2026-03-18 04:29:03.544027 | orchestrator | changed: [testbed-node-1] => {
2026-03-18 04:29:03.544039 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:29:03.544052 | orchestrator | }
2026-03-18 04:29:03.544065 | orchestrator | changed: [testbed-node-2] => {
2026-03-18 04:29:03.544078 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:29:03.544090 | orchestrator | }
2026-03-18 04:29:03.544104 | orchestrator |
2026-03-18 04:29:03.544116 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-18 04:29:03.544129 | orchestrator | Wednesday 18 March 2026 04:28:52 +0000 (0:00:00.370) 0:00:09.286 *******
2026-03-18 04:29:03.544140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-18 04:29:03.544157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-18 04:29:03.544177 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-03-18 04:29:03.544188 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-03-18 04:29:03.544210 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:29:03.544220 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:29:03.544232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-18 04:29:03.544243 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:29:03.544254 | orchestrator |
2026-03-18 04:29:03.544265 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-18 04:29:03.544275 | orchestrator | Wednesday 18 March 2026 04:28:53 +0000 (0:00:01.308) 0:00:10.595 *******
2026-03-18 04:29:03.544286 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:29:03.544297 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:29:03.544314 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:29:03.865750 | orchestrator |
2026-03-18 04:29:03.865876 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 04:29:03.865905 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 04:29:03.865928 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 04:29:03.865948 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 04:29:03.865967 | orchestrator |
2026-03-18 04:29:03.865988 | orchestrator |
2026-03-18 04:29:03.866007 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 04:29:03.866108 | orchestrator | Wednesday 18 March 2026 04:29:03 +0000 (0:00:09.738) 0:00:20.334 *******
2026-03-18 04:29:03.866133 | orchestrator | ===============================================================================
2026-03-18 04:29:03.866153 | orchestrator | memcached : Restart memcached container --------------------------------- 9.74s
2026-03-18 04:29:03.866176 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.92s
2026-03-18 04:29:03.866197 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.31s
2026-03-18 04:29:03.866217 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.25s
2026-03-18 04:29:03.866237 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.25s
2026-03-18 04:29:03.866258 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.20s
2026-03-18 04:29:03.866280 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.19s
2026-03-18 04:29:03.866335 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s
2026-03-18 04:29:03.866361 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.37s
2026-03-18 04:29:04.180869 | orchestrator | + osism apply -a upgrade redis
2026-03-18 04:29:06.360725 | orchestrator | 2026-03-18 04:29:06 | INFO  | Task d1c81ff6-de81-4219-bf47-f2daf42c92ec (redis) was prepared for execution.
2026-03-18 04:29:06.360830 | orchestrator | 2026-03-18 04:29:06 | INFO  | It takes a moment until task d1c81ff6-de81-4219-bf47-f2daf42c92ec (redis) has been started and output is visible here.
2026-03-18 04:29:18.546343 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-03-18 04:29:18.546516 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-03-18 04:29:18.546561 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-03-18 04:29:18.546571 | orchestrator | (): 'NoneType' object is not subscriptable
2026-03-18 04:29:18.546591 | orchestrator |
2026-03-18 04:29:18.546602 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-18 04:29:18.546611 | orchestrator |
2026-03-18 04:29:18.546621 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-18 04:29:18.546631 | orchestrator | Wednesday 18 March 2026 04:29:11 +0000 (0:00:01.209) 0:00:01.209 *******
2026-03-18 04:29:18.546641 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:29:18.546652 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:29:18.546662 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:29:18.546672 | orchestrator |
2026-03-18 04:29:18.546681 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-18 04:29:18.546691 | orchestrator | Wednesday 18 March 2026 04:29:12 +0000 (0:00:00.870) 0:00:02.079 *******
2026-03-18 04:29:18.546701 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-18 04:29:18.546711 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-18 04:29:18.546721 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-18 04:29:18.546731 | orchestrator |
2026-03-18 04:29:18.546740 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-18 04:29:18.546750 | orchestrator |
2026-03-18 04:29:18.546760 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-18 04:29:18.546770 | orchestrator | Wednesday 18 March 2026 04:29:13 +0000 (0:00:00.936) 0:00:03.016 *******
2026-03-18 04:29:18.546779 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-18 04:29:18.546790 | orchestrator |
2026-03-18 04:29:18.546800 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-18 04:29:18.546810 | orchestrator | Wednesday 18 March 2026 04:29:14 +0000 (0:00:01.245) 0:00:04.262 *******
2026-03-18 04:29:18.546823 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 04:29:18.546837 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 04:29:18.546868 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 04:29:18.546881 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 04:29:18.546917 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 04:29:18.546929 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 04:29:18.546941 | orchestrator |
2026-03-18 04:29:18.546953 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-03-18 04:29:18.546964 | orchestrator | Wednesday 18 March 2026 04:29:16 +0000 (0:00:01.455) 0:00:05.717 *******
2026-03-18 04:29:18.546976 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 04:29:18.546988 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 04:29:18.547007 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 04:29:18.547019 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 04:29:18.547042 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 04:29:24.345319 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 04:29:24.345496 | orchestrator |
2026-03-18 04:29:24.345517 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-03-18 04:29:24.345531 | orchestrator | Wednesday 18 March 2026 04:29:18 +0000 (0:00:02.144) 0:00:07.861 *******
2026-03-18 04:29:24.345544 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 04:29:24.345580 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 04:29:24.345601 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-18 04:29:24.345616 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-03-18 04:29:24.345628 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf',
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-18 04:29:24.345658 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-18 04:29:24.345670 | orchestrator | 2026-03-18 04:29:24.345681 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-03-18 04:29:24.345692 | orchestrator | Wednesday 18 March 2026 04:29:22 +0000 (0:00:03.787) 0:00:11.648 ******* 2026-03-18 04:29:24.345784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-18 04:29:24.345815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-18 04:29:24.345827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-18 04:29:24.345839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-18 04:29:24.345859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-18 04:29:24.345884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-18 04:29:47.039626 | orchestrator | 2026-03-18 04:29:47.039741 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-03-18 04:29:47.039759 | orchestrator | Wednesday 18 March 2026 04:29:24 +0000 (0:00:02.017) 0:00:13.666 ******* 2026-03-18 04:29:47.039772 | orchestrator | changed: [testbed-node-0] 
=> { 2026-03-18 04:29:47.039785 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:29:47.039796 | orchestrator | } 2026-03-18 04:29:47.039807 | orchestrator | changed: [testbed-node-1] => { 2026-03-18 04:29:47.039844 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:29:47.039856 | orchestrator | } 2026-03-18 04:29:47.039866 | orchestrator | changed: [testbed-node-2] => { 2026-03-18 04:29:47.039877 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:29:47.039889 | orchestrator | } 2026-03-18 04:29:47.039900 | orchestrator | 2026-03-18 04:29:47.039911 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-18 04:29:47.039922 | orchestrator | Wednesday 18 March 2026 04:29:24 +0000 (0:00:00.570) 0:00:14.237 ******* 2026-03-18 04:29:47.039936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-18 04:29:47.039950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-18 04:29:47.039963 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-18 04:29:47.039974 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-18 04:29:47.039996 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:29:47.040008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-18 04:29:47.040054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-18 04:29:47.040078 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:29:47.040108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-18 04:29:47.040131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-18 04:29:47.040142 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:29:47.040153 | orchestrator | 2026-03-18 04:29:47.040166 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-18 04:29:47.040179 | orchestrator | Wednesday 18 March 2026 04:29:26 +0000 (0:00:01.112) 0:00:15.350 ******* 2026-03-18 04:29:47.040191 | orchestrator | 2026-03-18 04:29:47.040204 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-18 04:29:47.040217 | orchestrator | Wednesday 18 March 2026 04:29:26 +0000 (0:00:00.083) 0:00:15.433 ******* 2026-03-18 04:29:47.040229 | orchestrator | 2026-03-18 04:29:47.040241 | orchestrator | TASK [redis : 
Flush handlers] **************************************************
2026-03-18 04:29:47.040254 | orchestrator | Wednesday 18 March 2026 04:29:26 +0000 (0:00:00.086) 0:00:15.520 *******
2026-03-18 04:29:47.040266 | orchestrator |
2026-03-18 04:29:47.040279 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-03-18 04:29:47.040292 | orchestrator | Wednesday 18 March 2026 04:29:26 +0000 (0:00:00.089) 0:00:15.610 *******
2026-03-18 04:29:47.040304 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:29:47.040317 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:29:47.040330 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:29:47.040343 | orchestrator |
2026-03-18 04:29:47.040355 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-03-18 04:29:47.040368 | orchestrator | Wednesday 18 March 2026 04:29:36 +0000 (0:00:09.917) 0:00:25.528 *******
2026-03-18 04:29:47.040380 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:29:47.040393 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:29:47.040455 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:29:47.040467 | orchestrator |
2026-03-18 04:29:47.040481 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 04:29:47.040495 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 04:29:47.040507 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 04:29:47.040518 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-18 04:29:47.040529 | orchestrator |
2026-03-18 04:29:47.040540 | orchestrator |
2026-03-18 04:29:47.040551 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 04:29:47.040561 | orchestrator | Wednesday 18 March 2026 04:29:46 +0000 (0:00:10.418) 0:00:35.946 *******
2026-03-18 04:29:47.040572 | orchestrator | ===============================================================================
2026-03-18 04:29:47.040583 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.42s
2026-03-18 04:29:47.040594 | orchestrator | redis : Restart redis container ----------------------------------------- 9.92s
2026-03-18 04:29:47.040604 | orchestrator | redis : Copying over redis config files --------------------------------- 3.79s
2026-03-18 04:29:47.040623 | orchestrator | redis : Copying over default config.json files -------------------------- 2.14s
2026-03-18 04:29:47.040634 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.02s
2026-03-18 04:29:47.040644 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.46s
2026-03-18 04:29:47.040661 | orchestrator | redis : include_tasks --------------------------------------------------- 1.25s
2026-03-18 04:29:47.040672 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.11s
2026-03-18 04:29:47.040683 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s
2026-03-18 04:29:47.040693 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.87s
2026-03-18 04:29:47.040704 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.57s
2026-03-18 04:29:47.040715 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.26s
2026-03-18 04:29:47.361862 | orchestrator | + osism apply -a upgrade mariadb
2026-03-18 04:29:49.607506 | orchestrator | 2026-03-18 04:29:49 | INFO  | Task 6f656821-e7ec-41f8-a2c4-03990d501285 (mariadb) was prepared for execution.
2026-03-18 04:29:49.607610 | orchestrator | 2026-03-18 04:29:49 | INFO  | It takes a moment until task 6f656821-e7ec-41f8-a2c4-03990d501285 (mariadb) has been started and output is visible here. 2026-03-18 04:30:05.796855 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-18 04:30:05.796942 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-18 04:30:05.796964 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-18 04:30:05.796974 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-18 04:30:05.796991 | orchestrator | 2026-03-18 04:30:05.797001 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 04:30:05.797009 | orchestrator | 2026-03-18 04:30:05.797018 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 04:30:05.797027 | orchestrator | Wednesday 18 March 2026 04:29:56 +0000 (0:00:02.047) 0:00:02.047 ******* 2026-03-18 04:30:05.797036 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:30:05.797046 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:30:05.797054 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:30:05.797063 | orchestrator | 2026-03-18 04:30:05.797072 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 04:30:05.797081 | orchestrator | Wednesday 18 March 2026 04:29:57 +0000 (0:00:00.916) 0:00:02.963 ******* 2026-03-18 04:30:05.797089 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-18 04:30:05.797099 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-18 04:30:05.797107 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-18 04:30:05.797116 | orchestrator | 2026-03-18 04:30:05.797125 | orchestrator | PLAY [Apply role mariadb] 
****************************************************** 2026-03-18 04:30:05.797134 | orchestrator | 2026-03-18 04:30:05.797143 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-18 04:30:05.797152 | orchestrator | Wednesday 18 March 2026 04:29:58 +0000 (0:00:01.255) 0:00:04.219 ******* 2026-03-18 04:30:05.797161 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:30:05.797169 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-18 04:30:05.797178 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-18 04:30:05.797187 | orchestrator | 2026-03-18 04:30:05.797196 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-18 04:30:05.797205 | orchestrator | Wednesday 18 March 2026 04:29:58 +0000 (0:00:00.450) 0:00:04.670 ******* 2026-03-18 04:30:05.797231 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:30:05.797241 | orchestrator | 2026-03-18 04:30:05.797250 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-18 04:30:05.797259 | orchestrator | Wednesday 18 March 2026 04:30:00 +0000 (0:00:01.152) 0:00:05.822 ******* 2026-03-18 04:30:05.797286 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-18 04:30:05.797326 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-18 04:30:05.797363 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-18 04:30:05.797379 | orchestrator | 2026-03-18 04:30:05.797394 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-18 04:30:05.797410 | orchestrator | Wednesday 18 March 2026 04:30:03 +0000 (0:00:03.962) 0:00:09.785 ******* 2026-03-18 04:30:05.797484 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:05.797502 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:05.797518 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:30:05.797530 | orchestrator | 2026-03-18 04:30:05.797540 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-18 04:30:05.797551 | orchestrator | Wednesday 18 March 2026 04:30:04 +0000 (0:00:00.602) 0:00:10.387 ******* 2026-03-18 04:30:05.797561 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:05.797572 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:05.797582 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:30:05.797593 | 
orchestrator | 2026-03-18 04:30:05.797603 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-18 04:30:05.797621 | orchestrator | Wednesday 18 March 2026 04:30:05 +0000 (0:00:01.207) 0:00:11.595 ******* 2026-03-18 04:30:17.740548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-18 04:30:17.740707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-03-18 04:30:17.740748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-18 04:30:17.740771 | orchestrator | 2026-03-18 04:30:17.740784 | orchestrator | TASK [mariadb : Copying over 
config.json files for mariabackup] **************** 2026-03-18 04:30:17.740796 | orchestrator | Wednesday 18 March 2026 04:30:08 +0000 (0:00:03.109) 0:00:14.704 ******* 2026-03-18 04:30:17.740807 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:17.740819 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:17.740830 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:30:17.740842 | orchestrator | 2026-03-18 04:30:17.740853 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-18 04:30:17.740864 | orchestrator | Wednesday 18 March 2026 04:30:09 +0000 (0:00:01.011) 0:00:15.716 ******* 2026-03-18 04:30:17.740874 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:30:17.740885 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:30:17.740896 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:30:17.740906 | orchestrator | 2026-03-18 04:30:17.740917 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-18 04:30:17.740928 | orchestrator | Wednesday 18 March 2026 04:30:13 +0000 (0:00:03.861) 0:00:19.577 ******* 2026-03-18 04:30:17.740939 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:30:17.740950 | orchestrator | 2026-03-18 04:30:17.740961 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-18 04:30:17.740972 | orchestrator | Wednesday 18 March 2026 04:30:14 +0000 (0:00:01.173) 0:00:20.751 ******* 2026-03-18 04:30:17.740997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:20.302008 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:20.302216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:20.302285 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:20.302330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:20.302354 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:20.302375 | orchestrator | 2026-03-18 04:30:20.302396 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-18 04:30:20.302417 | orchestrator | Wednesday 18 March 2026 04:30:17 +0000 (0:00:02.789) 0:00:23.541 ******* 2026-03-18 04:30:20.302496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:20.302533 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:20.302562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:20.302584 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:20.302620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:27.261592 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:27.261702 | orchestrator | 2026-03-18 04:30:27.261718 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-18 04:30:27.261732 | orchestrator | Wednesday 18 March 2026 
04:30:20 +0000 (0:00:02.558) 0:00:26.099 ******* 2026-03-18 04:30:27.261765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:27.261782 | orchestrator | skipping: [testbed-node-0] 
2026-03-18 04:30:27.261794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:27.261827 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:27.261859 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:27.261878 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:27.261890 | orchestrator | 2026-03-18 04:30:27.261901 | orchestrator | TASK 
[service-check-containers : mariadb | Check containers] ******************* 2026-03-18 04:30:27.261912 | orchestrator | Wednesday 18 March 2026 04:30:23 +0000 (0:00:03.537) 0:00:29.637 ******* 2026-03-18 04:30:27.261923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-18 04:30:27.261954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-18 04:30:31.108538 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-18 04:30:31.108713 | orchestrator | 2026-03-18 04:30:31.108747 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] 
*** 2026-03-18 04:30:31.108770 | orchestrator | Wednesday 18 March 2026 04:30:27 +0000 (0:00:03.428) 0:00:33.066 ******* 2026-03-18 04:30:31.108791 | orchestrator | changed: [testbed-node-0] => { 2026-03-18 04:30:31.108813 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:30:31.108833 | orchestrator | } 2026-03-18 04:30:31.108886 | orchestrator | changed: [testbed-node-1] => { 2026-03-18 04:30:31.108905 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:30:31.108925 | orchestrator | } 2026-03-18 04:30:31.108945 | orchestrator | changed: [testbed-node-2] => { 2026-03-18 04:30:31.108965 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:30:31.108985 | orchestrator | } 2026-03-18 04:30:31.109006 | orchestrator | 2026-03-18 04:30:31.109027 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-18 04:30:31.109047 | orchestrator | Wednesday 18 March 2026 04:30:27 +0000 (0:00:00.372) 0:00:33.439 ******* 2026-03-18 04:30:31.109097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:31.109147 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:31.109170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': 
[' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:31.109192 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:31.109214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check 
port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:31.109237 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:31.109257 | orchestrator | 2026-03-18 04:30:31.109278 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-03-18 04:30:31.109329 | orchestrator | Wednesday 18 March 2026 04:30:31 +0000 (0:00:03.460) 0:00:36.899 ******* 2026-03-18 04:30:40.572982 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.573087 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:40.573100 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:40.573112 | orchestrator | 2026-03-18 04:30:40.573138 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-03-18 04:30:40.573149 | orchestrator | Wednesday 18 March 2026 04:30:31 +0000 (0:00:00.372) 0:00:37.272 ******* 2026-03-18 04:30:40.573159 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.573169 | orchestrator | 2026-03-18 04:30:40.573179 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-03-18 04:30:40.573189 | orchestrator | Wednesday 18 March 2026 04:30:31 +0000 (0:00:00.135) 0:00:37.407 ******* 2026-03-18 
04:30:40.573198 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.573208 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:40.573218 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:40.573227 | orchestrator | 2026-03-18 04:30:40.573237 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-03-18 04:30:40.573246 | orchestrator | Wednesday 18 March 2026 04:30:31 +0000 (0:00:00.366) 0:00:37.773 ******* 2026-03-18 04:30:40.573256 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.573266 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:40.573275 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:40.573285 | orchestrator | 2026-03-18 04:30:40.573294 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-03-18 04:30:40.573304 | orchestrator | Wednesday 18 March 2026 04:30:32 +0000 (0:00:00.584) 0:00:38.358 ******* 2026-03-18 04:30:40.573314 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.573323 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:40.573333 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:40.573343 | orchestrator | 2026-03-18 04:30:40.573352 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-03-18 04:30:40.573362 | orchestrator | Wednesday 18 March 2026 04:30:32 +0000 (0:00:00.397) 0:00:38.755 ******* 2026-03-18 04:30:40.573372 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.573382 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:40.573392 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:40.573401 | orchestrator | 2026-03-18 04:30:40.573411 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-03-18 04:30:40.573421 | orchestrator | Wednesday 18 March 2026 04:30:33 +0000 (0:00:00.355) 0:00:39.110 ******* 2026-03-18 
04:30:40.573430 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.573488 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:40.573500 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:40.573512 | orchestrator | 2026-03-18 04:30:40.573524 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-03-18 04:30:40.573536 | orchestrator | Wednesday 18 March 2026 04:30:33 +0000 (0:00:00.358) 0:00:39.469 ******* 2026-03-18 04:30:40.573548 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.573560 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:40.573571 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:40.573581 | orchestrator | 2026-03-18 04:30:40.573592 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-03-18 04:30:40.573602 | orchestrator | Wednesday 18 March 2026 04:30:34 +0000 (0:00:00.599) 0:00:40.068 ******* 2026-03-18 04:30:40.573612 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-18 04:30:40.573623 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-18 04:30:40.573634 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-18 04:30:40.573644 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.573654 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-18 04:30:40.573665 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-18 04:30:40.573699 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-18 04:30:40.573710 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:40.573720 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-18 04:30:40.573730 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-18 04:30:40.573740 | orchestrator | skipping: [testbed-node-2] => 
(item=testbed-node-2)  2026-03-18 04:30:40.573750 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:40.573761 | orchestrator | 2026-03-18 04:30:40.573771 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-03-18 04:30:40.573781 | orchestrator | Wednesday 18 March 2026 04:30:34 +0000 (0:00:00.466) 0:00:40.535 ******* 2026-03-18 04:30:40.573791 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.573801 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:40.573812 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:40.573822 | orchestrator | 2026-03-18 04:30:40.573832 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-03-18 04:30:40.573843 | orchestrator | Wednesday 18 March 2026 04:30:35 +0000 (0:00:00.393) 0:00:40.928 ******* 2026-03-18 04:30:40.573853 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.573864 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:40.573874 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:40.573884 | orchestrator | 2026-03-18 04:30:40.573895 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-03-18 04:30:40.573904 | orchestrator | Wednesday 18 March 2026 04:30:35 +0000 (0:00:00.580) 0:00:41.509 ******* 2026-03-18 04:30:40.573914 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.573924 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:40.573934 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:40.573945 | orchestrator | 2026-03-18 04:30:40.573955 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-03-18 04:30:40.573965 | orchestrator | Wednesday 18 March 2026 04:30:36 +0000 (0:00:00.357) 0:00:41.866 ******* 2026-03-18 04:30:40.573975 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.573986 | orchestrator | 
skipping: [testbed-node-1] 2026-03-18 04:30:40.573996 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:40.574006 | orchestrator | 2026-03-18 04:30:40.574071 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-03-18 04:30:40.574099 | orchestrator | Wednesday 18 March 2026 04:30:36 +0000 (0:00:00.334) 0:00:42.201 ******* 2026-03-18 04:30:40.574110 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.574120 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:40.574129 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:40.574139 | orchestrator | 2026-03-18 04:30:40.574154 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-03-18 04:30:40.574165 | orchestrator | Wednesday 18 March 2026 04:30:36 +0000 (0:00:00.367) 0:00:42.568 ******* 2026-03-18 04:30:40.574176 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.574186 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:40.574197 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:40.574207 | orchestrator | 2026-03-18 04:30:40.574218 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-03-18 04:30:40.574228 | orchestrator | Wednesday 18 March 2026 04:30:37 +0000 (0:00:00.546) 0:00:43.114 ******* 2026-03-18 04:30:40.574238 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.574247 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:40.574257 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:40.574267 | orchestrator | 2026-03-18 04:30:40.574277 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-03-18 04:30:40.574287 | orchestrator | Wednesday 18 March 2026 04:30:37 +0000 (0:00:00.352) 0:00:43.466 ******* 2026-03-18 04:30:40.574297 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.574306 | orchestrator | 
skipping: [testbed-node-1] 2026-03-18 04:30:40.574324 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:40.574335 | orchestrator | 2026-03-18 04:30:40.574344 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-03-18 04:30:40.574354 | orchestrator | Wednesday 18 March 2026 04:30:38 +0000 (0:00:00.377) 0:00:43.844 ******* 2026-03-18 04:30:40.574371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:40.574385 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:40.574410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 
2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:43.745203 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:43.745313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:43.745332 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:43.745344 | orchestrator | 2026-03-18 04:30:43.745357 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-03-18 04:30:43.745369 | orchestrator | Wednesday 18 March 2026 04:30:40 +0000 (0:00:02.532) 0:00:46.376 ******* 2026-03-18 04:30:43.745380 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:43.745391 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:43.745402 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:43.745412 | orchestrator | 2026-03-18 04:30:43.745424 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-03-18 04:30:43.745435 | orchestrator | Wednesday 18 March 2026 04:30:41 +0000 (0:00:00.560) 0:00:46.937 ******* 2026-03-18 04:30:43.745570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:43.745609 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:30:43.745621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:43.745633 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:30:43.745650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-18 04:30:43.745670 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:30:43.745681 | orchestrator | 2026-03-18 04:30:43.745692 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-03-18 04:30:43.745703 | orchestrator | Wednesday 18 March 2026 04:30:43 +0000 (0:00:02.414) 0:00:49.351 ******* 2026-03-18 04:30:43.745720 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:32:41.872880 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:32:41.872998 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:32:41.873015 | orchestrator | 2026-03-18 04:32:41.873028 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-03-18 04:32:41.873041 | orchestrator | Wednesday 18 March 2026 04:30:44 +0000 (0:00:00.724) 0:00:50.076 ******* 2026-03-18 04:32:41.873052 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:32:41.873064 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:32:41.873075 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:32:41.873086 | orchestrator | 2026-03-18 04:32:41.873097 | orchestrator | TASK [service-check : mariadb | Fail 
if containers are missing or not running] *** 2026-03-18 04:32:41.873109 | orchestrator | Wednesday 18 March 2026 04:30:44 +0000 (0:00:00.616) 0:00:50.692 ******* 2026-03-18 04:32:41.873120 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:32:41.873132 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:32:41.873143 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:32:41.873154 | orchestrator | 2026-03-18 04:32:41.873165 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-03-18 04:32:41.873176 | orchestrator | Wednesday 18 March 2026 04:30:45 +0000 (0:00:00.398) 0:00:51.091 ******* 2026-03-18 04:32:41.873187 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:32:41.873198 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:32:41.873209 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:32:41.873219 | orchestrator | 2026-03-18 04:32:41.873230 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-18 04:32:41.873241 | orchestrator | Wednesday 18 March 2026 04:30:46 +0000 (0:00:00.937) 0:00:52.028 ******* 2026-03-18 04:32:41.873252 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:32:41.873263 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:32:41.873274 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:32:41.873285 | orchestrator | 2026-03-18 04:32:41.873296 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-18 04:32:41.873307 | orchestrator | Wednesday 18 March 2026 04:30:47 +0000 (0:00:00.916) 0:00:52.945 ******* 2026-03-18 04:32:41.873318 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:32:41.873330 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:32:41.873340 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:32:41.873351 | orchestrator | 2026-03-18 04:32:41.873362 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume 
availability] ************* 2026-03-18 04:32:41.873373 | orchestrator | Wednesday 18 March 2026 04:30:48 +0000 (0:00:00.895) 0:00:53.841 ******* 2026-03-18 04:32:41.873384 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:32:41.873395 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:32:41.873406 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:32:41.873417 | orchestrator | 2026-03-18 04:32:41.873428 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-18 04:32:41.873439 | orchestrator | Wednesday 18 March 2026 04:30:48 +0000 (0:00:00.398) 0:00:54.240 ******* 2026-03-18 04:32:41.873450 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:32:41.873460 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:32:41.873471 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:32:41.873482 | orchestrator | 2026-03-18 04:32:41.873493 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-18 04:32:41.873504 | orchestrator | Wednesday 18 March 2026 04:30:48 +0000 (0:00:00.357) 0:00:54.597 ******* 2026-03-18 04:32:41.873562 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:32:41.873575 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:32:41.873585 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:32:41.873596 | orchestrator | 2026-03-18 04:32:41.873607 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-18 04:32:41.873618 | orchestrator | Wednesday 18 March 2026 04:30:49 +0000 (0:00:01.062) 0:00:55.660 ******* 2026-03-18 04:32:41.873628 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:32:41.873639 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:32:41.873650 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:32:41.873661 | orchestrator | 2026-03-18 04:32:41.873672 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-18 04:32:41.873682 | orchestrator | 
Wednesday 18 March 2026 04:30:50 +0000 (0:00:00.391) 0:00:56.051 ******* 2026-03-18 04:32:41.873693 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:32:41.873704 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:32:41.873715 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:32:41.873725 | orchestrator | 2026-03-18 04:32:41.873737 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-18 04:32:41.873748 | orchestrator | Wednesday 18 March 2026 04:30:50 +0000 (0:00:00.388) 0:00:56.440 ******* 2026-03-18 04:32:41.873758 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:32:41.873769 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:32:41.873780 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:32:41.873791 | orchestrator | 2026-03-18 04:32:41.873801 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-18 04:32:41.873812 | orchestrator | Wednesday 18 March 2026 04:30:53 +0000 (0:00:02.389) 0:00:58.829 ******* 2026-03-18 04:32:41.873823 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:32:41.873834 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:32:41.873844 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:32:41.873855 | orchestrator | 2026-03-18 04:32:41.873866 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-18 04:32:41.873877 | orchestrator | Wednesday 18 March 2026 04:30:53 +0000 (0:00:00.611) 0:00:59.441 ******* 2026-03-18 04:32:41.873887 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:32:41.873913 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:32:41.873925 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:32:41.873935 | orchestrator | 2026-03-18 04:32:41.873946 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-18 04:32:41.873957 | orchestrator | Wednesday 18 March 2026 04:30:53 +0000 
(0:00:00.338) 0:00:59.780 ******* 2026-03-18 04:32:41.873968 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:32:41.873979 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:32:41.873989 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:32:41.874000 | orchestrator | 2026-03-18 04:32:41.874011 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-18 04:32:41.874081 | orchestrator | Wednesday 18 March 2026 04:30:54 +0000 (0:00:00.764) 0:01:00.544 ******* 2026-03-18 04:32:41.874092 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:32:41.874103 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:32:41.874114 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:32:41.874142 | orchestrator | 2026-03-18 04:32:41.874154 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-18 04:32:41.874165 | orchestrator | Wednesday 18 March 2026 04:30:55 +0000 (0:00:00.696) 0:01:01.241 ******* 2026-03-18 04:32:41.874175 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:32:41.874187 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-18 04:32:41.874198 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-18 04:32:41.874219 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:32:41.874230 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:32:41.874251 | orchestrator | 2026-03-18 04:32:41.874262 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-18 04:32:41.874273 | orchestrator | Wednesday 18 March 2026 04:30:56 +0000 (0:00:00.852) 0:01:02.093 ******* 2026-03-18 04:32:41.874284 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:32:41.874295 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:32:41.874305 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:32:41.874316 | 
orchestrator | 2026-03-18 04:32:41.874327 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-18 04:32:41.874337 | orchestrator | Wednesday 18 March 2026 04:30:56 +0000 (0:00:00.701) 0:01:02.795 ******* 2026-03-18 04:32:41.874348 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:32:41.874359 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:32:41.874370 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:32:41.874380 | orchestrator | 2026-03-18 04:32:41.874391 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-18 04:32:41.874402 | orchestrator | 2026-03-18 04:32:41.874412 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-18 04:32:41.874423 | orchestrator | Wednesday 18 March 2026 04:30:57 +0000 (0:00:00.988) 0:01:03.783 ******* 2026-03-18 04:32:41.874434 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:32:41.874445 | orchestrator | 2026-03-18 04:32:41.874455 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-18 04:32:41.874466 | orchestrator | Wednesday 18 March 2026 04:31:23 +0000 (0:00:25.073) 0:01:28.857 ******* 2026-03-18 04:32:41.874477 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:32:41.874487 | orchestrator | 2026-03-18 04:32:41.874498 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-18 04:32:41.874509 | orchestrator | Wednesday 18 March 2026 04:31:28 +0000 (0:00:05.615) 0:01:34.472 ******* 2026-03-18 04:32:41.874563 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:32:41.874576 | orchestrator | 2026-03-18 04:32:41.874586 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-18 04:32:41.874597 | orchestrator | 2026-03-18 04:32:41.874608 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-03-18 04:32:41.874619 | orchestrator | Wednesday 18 March 2026 04:31:31 +0000 (0:00:02.716) 0:01:37.189 ******* 2026-03-18 04:32:41.874629 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:32:41.874640 | orchestrator | 2026-03-18 04:32:41.874651 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-18 04:32:41.874662 | orchestrator | Wednesday 18 March 2026 04:31:56 +0000 (0:00:25.065) 0:02:02.254 ******* 2026-03-18 04:32:41.874673 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:32:41.874683 | orchestrator | 2026-03-18 04:32:41.874694 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-18 04:32:41.874705 | orchestrator | Wednesday 18 March 2026 04:32:01 +0000 (0:00:04.668) 0:02:06.922 ******* 2026-03-18 04:32:41.874716 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:32:41.874726 | orchestrator | 2026-03-18 04:32:41.874737 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-18 04:32:41.874748 | orchestrator | 2026-03-18 04:32:41.874759 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-18 04:32:41.874769 | orchestrator | Wednesday 18 March 2026 04:32:04 +0000 (0:00:03.175) 0:02:10.097 ******* 2026-03-18 04:32:41.874780 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:32:41.874791 | orchestrator | 2026-03-18 04:32:41.874802 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-18 04:32:41.874813 | orchestrator | Wednesday 18 March 2026 04:32:29 +0000 (0:00:25.428) 0:02:35.526 ******* 2026-03-18 04:32:41.874824 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:32:41.874834 | orchestrator | 2026-03-18 04:32:41.874845 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-18 04:32:41.874856 
| orchestrator | Wednesday 18 March 2026 04:32:35 +0000 (0:00:05.610) 0:02:41.137 ******* 2026-03-18 04:32:41.874873 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-03-18 04:32:41.874884 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-18 04:32:41.874895 | orchestrator | mariadb_bootstrap_restart 2026-03-18 04:32:41.874906 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:32:41.874917 | orchestrator | 2026-03-18 04:32:41.874927 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-18 04:32:41.874938 | orchestrator | skipping: no hosts matched 2026-03-18 04:32:41.874949 | orchestrator | 2026-03-18 04:32:41.874960 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-18 04:32:41.874976 | orchestrator | skipping: no hosts matched 2026-03-18 04:32:41.874987 | orchestrator | 2026-03-18 04:32:41.874998 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-18 04:32:41.875009 | orchestrator | 2026-03-18 04:32:41.875019 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-18 04:32:41.875030 | orchestrator | Wednesday 18 March 2026 04:32:38 +0000 (0:00:03.227) 0:02:44.364 ******* 2026-03-18 04:32:41.875041 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:32:41.875051 | orchestrator | 2026-03-18 04:32:41.875062 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-18 04:32:41.875073 | orchestrator | Wednesday 18 March 2026 04:32:39 +0000 (0:00:01.118) 0:02:45.482 ******* 2026-03-18 04:32:41.875083 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:32:41.875094 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:32:41.875114 | orchestrator | ok: [testbed-node-0] 2026-03-18 
04:33:19.643201 | orchestrator | 2026-03-18 04:33:19.643321 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-18 04:33:19.643338 | orchestrator | Wednesday 18 March 2026 04:32:41 +0000 (0:00:02.191) 0:02:47.674 ******* 2026-03-18 04:33:19.643351 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:33:19.643364 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:33:19.643375 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:33:19.643387 | orchestrator | 2026-03-18 04:33:19.643398 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-18 04:33:19.643409 | orchestrator | Wednesday 18 March 2026 04:32:44 +0000 (0:00:02.234) 0:02:49.909 ******* 2026-03-18 04:33:19.643420 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:33:19.643431 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:33:19.643442 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:33:19.643454 | orchestrator | 2026-03-18 04:33:19.643465 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-18 04:33:19.643475 | orchestrator | Wednesday 18 March 2026 04:32:46 +0000 (0:00:02.097) 0:02:52.006 ******* 2026-03-18 04:33:19.643489 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:33:19.643507 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:33:19.643526 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:33:19.643537 | orchestrator | 2026-03-18 04:33:19.643624 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-03-18 04:33:19.643638 | orchestrator | Wednesday 18 March 2026 04:32:48 +0000 (0:00:02.352) 0:02:54.358 ******* 2026-03-18 04:33:19.643649 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:33:19.643659 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:33:19.643670 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:33:19.643681 | 
orchestrator | 2026-03-18 04:33:19.643692 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-03-18 04:33:19.643703 | orchestrator | Wednesday 18 March 2026 04:32:54 +0000 (0:00:05.471) 0:02:59.829 ******* 2026-03-18 04:33:19.643716 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:33:19.643728 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:33:19.643740 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:33:19.643753 | orchestrator | 2026-03-18 04:33:19.643765 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-03-18 04:33:19.643778 | orchestrator | Wednesday 18 March 2026 04:32:56 +0000 (0:00:02.708) 0:03:02.538 ******* 2026-03-18 04:33:19.643816 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:33:19.643829 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:33:19.643841 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:33:19.643854 | orchestrator | 2026-03-18 04:33:19.643866 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-18 04:33:19.643878 | orchestrator | Wednesday 18 March 2026 04:32:57 +0000 (0:00:00.880) 0:03:03.419 ******* 2026-03-18 04:33:19.643890 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:33:19.643903 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:33:19.643915 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:33:19.643928 | orchestrator | 2026-03-18 04:33:19.643955 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-18 04:33:19.643968 | orchestrator | Wednesday 18 March 2026 04:33:00 +0000 (0:00:02.751) 0:03:06.171 ******* 2026-03-18 04:33:19.643980 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:33:19.644005 | orchestrator | 2026-03-18 04:33:19.644017 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] 
****************************** 2026-03-18 04:33:19.644030 | orchestrator | Wednesday 18 March 2026 04:33:01 +0000 (0:00:01.292) 0:03:07.463 ******* 2026-03-18 04:33:19.644042 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:33:19.644054 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:33:19.644066 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:33:19.644077 | orchestrator | 2026-03-18 04:33:19.644088 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 04:33:19.644100 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-18 04:33:19.644112 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-18 04:33:19.644123 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-18 04:33:19.644133 | orchestrator | 2026-03-18 04:33:19.644144 | orchestrator | 2026-03-18 04:33:19.644155 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 04:33:19.644166 | orchestrator | Wednesday 18 March 2026 04:33:19 +0000 (0:00:17.510) 0:03:24.974 ******* 2026-03-18 04:33:19.644176 | orchestrator | =============================================================================== 2026-03-18 04:33:19.644187 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 75.57s 2026-03-18 04:33:19.644197 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 17.51s 2026-03-18 04:33:19.644208 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 15.89s 2026-03-18 04:33:19.644234 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 9.12s 2026-03-18 04:33:19.644245 | orchestrator | service-check : mariadb | Get container facts --------------------------- 
5.47s 2026-03-18 04:33:19.644255 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.96s 2026-03-18 04:33:19.644266 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.86s 2026-03-18 04:33:19.644276 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.54s 2026-03-18 04:33:19.644287 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.46s 2026-03-18 04:33:19.644297 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 3.43s 2026-03-18 04:33:19.644327 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.11s 2026-03-18 04:33:19.644338 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.79s 2026-03-18 04:33:19.644349 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.75s 2026-03-18 04:33:19.644360 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 2.71s 2026-03-18 04:33:19.644379 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.56s 2026-03-18 04:33:19.644389 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 2.53s 2026-03-18 04:33:19.644400 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 2.41s 2026-03-18 04:33:19.644410 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 2.39s 2026-03-18 04:33:19.644421 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.35s 2026-03-18 04:33:19.644431 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.23s 2026-03-18 04:33:19.955237 | orchestrator | + osism apply -a upgrade rabbitmq 2026-03-18 04:33:22.127272 | orchestrator | 
2026-03-18 04:33:22 | INFO  | Task f16d8138-6eae-4437-85a9-b51609d4f4c6 (rabbitmq) was prepared for execution. 2026-03-18 04:33:22.127409 | orchestrator | 2026-03-18 04:33:22 | INFO  | It takes a moment until task f16d8138-6eae-4437-85a9-b51609d4f4c6 (rabbitmq) has been started and output is visible here. 2026-03-18 04:34:06.619660 | orchestrator | 2026-03-18 04:34:06.619780 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 04:34:06.619801 | orchestrator | 2026-03-18 04:34:06.619814 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 04:34:06.619826 | orchestrator | Wednesday 18 March 2026 04:33:27 +0000 (0:00:01.346) 0:00:01.346 ******* 2026-03-18 04:34:06.619837 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:34:06.619850 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:34:06.619860 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:34:06.619871 | orchestrator | 2026-03-18 04:34:06.619883 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 04:34:06.619893 | orchestrator | Wednesday 18 March 2026 04:33:29 +0000 (0:00:01.905) 0:00:03.251 ******* 2026-03-18 04:34:06.619904 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-18 04:34:06.619916 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-18 04:34:06.619927 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-18 04:34:06.619938 | orchestrator | 2026-03-18 04:34:06.619949 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-18 04:34:06.619959 | orchestrator | 2026-03-18 04:34:06.619970 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-18 04:34:06.619981 | orchestrator | Wednesday 18 March 2026 04:33:31 +0000 (0:00:01.934) 0:00:05.185 ******* 2026-03-18 
04:34:06.619993 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:34:06.620004 | orchestrator | 2026-03-18 04:34:06.620016 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-18 04:34:06.620026 | orchestrator | Wednesday 18 March 2026 04:33:34 +0000 (0:00:02.786) 0:00:07.972 ******* 2026-03-18 04:34:06.620037 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:34:06.620048 | orchestrator | 2026-03-18 04:34:06.620059 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-18 04:34:06.620070 | orchestrator | Wednesday 18 March 2026 04:33:36 +0000 (0:00:02.342) 0:00:10.314 ******* 2026-03-18 04:34:06.620081 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:34:06.620092 | orchestrator | 2026-03-18 04:34:06.620103 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-18 04:34:06.620113 | orchestrator | Wednesday 18 March 2026 04:33:40 +0000 (0:00:03.271) 0:00:13.586 ******* 2026-03-18 04:34:06.620124 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:34:06.620136 | orchestrator | 2026-03-18 04:34:06.620149 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-18 04:34:06.620162 | orchestrator | Wednesday 18 March 2026 04:33:49 +0000 (0:00:09.852) 0:00:23.439 ******* 2026-03-18 04:34:06.620174 | orchestrator | ok: [testbed-node-0] => { 2026-03-18 04:34:06.620187 | orchestrator |  "changed": false, 2026-03-18 04:34:06.620225 | orchestrator |  "msg": "All assertions passed" 2026-03-18 04:34:06.620239 | orchestrator | } 2026-03-18 04:34:06.620252 | orchestrator | 2026-03-18 04:34:06.620264 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-18 04:34:06.620277 | orchestrator | Wednesday 18 March 2026 04:33:51 +0000 
(0:00:01.456) 0:00:24.895 ******* 2026-03-18 04:34:06.620289 | orchestrator | ok: [testbed-node-0] => { 2026-03-18 04:34:06.620301 | orchestrator |  "changed": false, 2026-03-18 04:34:06.620314 | orchestrator |  "msg": "All assertions passed" 2026-03-18 04:34:06.620327 | orchestrator | } 2026-03-18 04:34:06.620339 | orchestrator | 2026-03-18 04:34:06.620351 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-18 04:34:06.620379 | orchestrator | Wednesday 18 March 2026 04:33:53 +0000 (0:00:01.704) 0:00:26.600 ******* 2026-03-18 04:34:06.620392 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:34:06.620405 | orchestrator | 2026-03-18 04:34:06.620418 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-18 04:34:06.620430 | orchestrator | Wednesday 18 March 2026 04:33:54 +0000 (0:00:01.737) 0:00:28.337 ******* 2026-03-18 04:34:06.620443 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:34:06.620454 | orchestrator | 2026-03-18 04:34:06.620467 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-18 04:34:06.620479 | orchestrator | Wednesday 18 March 2026 04:33:57 +0000 (0:00:02.410) 0:00:30.747 ******* 2026-03-18 04:34:06.620491 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:34:06.620503 | orchestrator | 2026-03-18 04:34:06.620516 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-18 04:34:06.620527 | orchestrator | Wednesday 18 March 2026 04:34:00 +0000 (0:00:03.017) 0:00:33.765 ******* 2026-03-18 04:34:06.620537 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:34:06.620548 | orchestrator | 2026-03-18 04:34:06.620559 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-18 04:34:06.620570 | 
orchestrator | Wednesday 18 March 2026 04:34:02 +0000 (0:00:01.931) 0:00:35.696 ******* 2026-03-18 04:34:06.620629 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 04:34:06.620647 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 04:34:06.620675 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 04:34:06.620688 | orchestrator | 2026-03-18 04:34:06.620699 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-18 04:34:06.620710 | orchestrator | Wednesday 18 March 2026 04:34:04 +0000 (0:00:01.805) 0:00:37.502 ******* 2026-03-18 04:34:06.620721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 04:34:06.620743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 04:34:26.010462 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 04:34:26.010647 | orchestrator | 2026-03-18 04:34:26.010664 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-18 04:34:26.010673 | orchestrator | Wednesday 18 March 2026 04:34:06 +0000 (0:00:02.566) 0:00:40.068 ******* 2026-03-18 04:34:26.010679 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-18 04:34:26.010686 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-18 04:34:26.010693 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-18 04:34:26.010700 | orchestrator | 2026-03-18 04:34:26.010707 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-18 04:34:26.010713 | orchestrator | Wednesday 18 March 2026 04:34:09 +0000 (0:00:02.446) 0:00:42.516 ******* 
2026-03-18 04:34:26.010734 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-18 04:34:26.010741 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-18 04:34:26.010748 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-18 04:34:26.010756 | orchestrator | 2026-03-18 04:34:26.010763 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-18 04:34:26.010769 | orchestrator | Wednesday 18 March 2026 04:34:12 +0000 (0:00:03.003) 0:00:45.519 ******* 2026-03-18 04:34:26.010776 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-18 04:34:26.010783 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-18 04:34:26.010789 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-18 04:34:26.010795 | orchestrator | 2026-03-18 04:34:26.010801 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-18 04:34:26.010808 | orchestrator | Wednesday 18 March 2026 04:34:14 +0000 (0:00:02.362) 0:00:47.881 ******* 2026-03-18 04:34:26.010815 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-18 04:34:26.010822 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-18 04:34:26.010828 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-18 04:34:26.010835 | orchestrator | 2026-03-18 04:34:26.010842 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-18 04:34:26.010850 | orchestrator | Wednesday 18 March 2026 04:34:16 +0000 (0:00:02.477) 
0:00:50.359 ******* 2026-03-18 04:34:26.010857 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-18 04:34:26.010864 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-18 04:34:26.010870 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-18 04:34:26.010877 | orchestrator | 2026-03-18 04:34:26.010884 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-18 04:34:26.010899 | orchestrator | Wednesday 18 March 2026 04:34:19 +0000 (0:00:02.269) 0:00:52.628 ******* 2026-03-18 04:34:26.010906 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-18 04:34:26.010912 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-18 04:34:26.010918 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-18 04:34:26.010926 | orchestrator | 2026-03-18 04:34:26.010932 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-18 04:34:26.010940 | orchestrator | Wednesday 18 March 2026 04:34:21 +0000 (0:00:02.542) 0:00:55.171 ******* 2026-03-18 04:34:26.010946 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:34:26.010954 | orchestrator | 2026-03-18 04:34:26.010977 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-03-18 04:34:26.010983 | orchestrator | Wednesday 18 March 2026 04:34:23 +0000 (0:00:01.703) 0:00:56.874 ******* 2026-03-18 04:34:26.010992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 04:34:26.011004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 04:34:26.011014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 04:34:26.011027 | orchestrator | 2026-03-18 04:34:26.011034 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-03-18 04:34:26.011042 | orchestrator | Wednesday 18 March 2026 04:34:25 +0000 (0:00:02.419) 0:00:59.294 ******* 2026-03-18 04:34:26.011058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-18 04:34:35.414227 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:34:35.414344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-18 04:34:35.414366 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:34:35.414396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-18 04:34:35.414431 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:34:35.414443 | orchestrator | 2026-03-18 04:34:35.414455 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-03-18 04:34:35.414467 | orchestrator | Wednesday 18 March 2026 04:34:27 +0000 (0:00:01.557) 0:01:00.851 ******* 2026-03-18 04:34:35.414480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-18 04:34:35.414492 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:34:35.414521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-18 04:34:35.414533 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:34:35.414551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-18 04:34:35.414563 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:34:35.414574 | orchestrator | 2026-03-18 04:34:35.414585 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-18 04:34:35.414640 | orchestrator | Wednesday 18 March 2026 04:34:29 +0000 (0:00:01.861) 0:01:02.713 ******* 2026-03-18 04:34:35.414661 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:34:35.414672 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:34:35.414683 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:34:35.414694 | orchestrator | 2026-03-18 04:34:35.414704 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-03-18 04:34:35.414715 | orchestrator | Wednesday 18 March 2026 04:34:33 +0000 (0:00:03.821) 0:01:06.534 ******* 2026-03-18 04:34:35.414726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 04:34:35.414748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 04:36:18.938753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-18 04:36:18.938909 | orchestrator | 2026-03-18 04:36:18.938948 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-03-18 04:36:18.938962 | orchestrator | Wednesday 18 March 2026 04:34:35 +0000 (0:00:02.335) 0:01:08.870 ******* 2026-03-18 04:36:18.938974 | orchestrator | changed: [testbed-node-0] => { 2026-03-18 04:36:18.939014 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:36:18.939026 | orchestrator | } 2026-03-18 04:36:18.939037 | orchestrator | changed: [testbed-node-1] => { 2026-03-18 04:36:18.939048 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:36:18.939059 | orchestrator | } 2026-03-18 04:36:18.939069 | orchestrator | changed: [testbed-node-2] => { 2026-03-18 04:36:18.939080 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:36:18.939090 | orchestrator | } 2026-03-18 04:36:18.939101 | orchestrator | 2026-03-18 04:36:18.939113 | orchestrator | TASK [service-check-containers : Include tasks] 
******************************** 2026-03-18 04:36:18.939124 | orchestrator | Wednesday 18 March 2026 04:34:36 +0000 (0:00:01.369) 0:01:10.239 ******* 2026-03-18 04:36:18.939137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-18 04:36:18.939149 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:36:18.939161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-18 04:36:18.939173 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:36:18.939207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-18 04:36:18.939232 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:36:18.939245 | orchestrator | 2026-03-18 04:36:18.939258 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-18 04:36:18.939270 | orchestrator | Wednesday 18 March 2026 04:34:38 +0000 (0:00:02.060) 0:01:12.299 
******* 2026-03-18 04:36:18.939283 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:36:18.939296 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:36:18.939308 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:36:18.939320 | orchestrator | 2026-03-18 04:36:18.939333 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-18 04:36:18.939345 | orchestrator | 2026-03-18 04:36:18.939358 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-18 04:36:18.939371 | orchestrator | Wednesday 18 March 2026 04:34:40 +0000 (0:00:01.746) 0:01:14.046 ******* 2026-03-18 04:36:18.939384 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:36:18.939398 | orchestrator | 2026-03-18 04:36:18.939451 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-18 04:36:18.939465 | orchestrator | Wednesday 18 March 2026 04:34:42 +0000 (0:00:02.012) 0:01:16.059 ******* 2026-03-18 04:36:18.939477 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:36:18.939490 | orchestrator | 2026-03-18 04:36:18.939502 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-18 04:36:18.939514 | orchestrator | Wednesday 18 March 2026 04:34:51 +0000 (0:00:08.977) 0:01:25.036 ******* 2026-03-18 04:36:18.939526 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:36:18.939538 | orchestrator | 2026-03-18 04:36:18.939550 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-18 04:36:18.939563 | orchestrator | Wednesday 18 March 2026 04:35:00 +0000 (0:00:09.099) 0:01:34.135 ******* 2026-03-18 04:36:18.939574 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:36:18.939584 | orchestrator | 2026-03-18 04:36:18.939595 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-18 
04:36:18.939606 | orchestrator | 2026-03-18 04:36:18.939616 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-18 04:36:18.939627 | orchestrator | Wednesday 18 March 2026 04:35:10 +0000 (0:00:09.604) 0:01:43.740 ******* 2026-03-18 04:36:18.939638 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:36:18.939648 | orchestrator | 2026-03-18 04:36:18.939682 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-18 04:36:18.939693 | orchestrator | Wednesday 18 March 2026 04:35:12 +0000 (0:00:01.819) 0:01:45.560 ******* 2026-03-18 04:36:18.939704 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:36:18.939715 | orchestrator | 2026-03-18 04:36:18.939726 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-18 04:36:18.939737 | orchestrator | Wednesday 18 March 2026 04:35:20 +0000 (0:00:08.742) 0:01:54.302 ******* 2026-03-18 04:36:18.939748 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:36:18.939758 | orchestrator | 2026-03-18 04:36:18.939769 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-18 04:36:18.939780 | orchestrator | Wednesday 18 March 2026 04:35:35 +0000 (0:00:14.432) 0:02:08.735 ******* 2026-03-18 04:36:18.939791 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:36:18.939801 | orchestrator | 2026-03-18 04:36:18.939812 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-18 04:36:18.939823 | orchestrator | 2026-03-18 04:36:18.939834 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-18 04:36:18.939844 | orchestrator | Wednesday 18 March 2026 04:35:44 +0000 (0:00:09.348) 0:02:18.084 ******* 2026-03-18 04:36:18.939855 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:36:18.939866 | orchestrator | 2026-03-18 04:36:18.939877 
| orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-18 04:36:18.939888 | orchestrator | Wednesday 18 March 2026 04:35:46 +0000 (0:00:01.780) 0:02:19.864 ******* 2026-03-18 04:36:18.939907 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:36:18.939917 | orchestrator | 2026-03-18 04:36:18.939928 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-18 04:36:18.939939 | orchestrator | Wednesday 18 March 2026 04:35:55 +0000 (0:00:08.702) 0:02:28.566 ******* 2026-03-18 04:36:18.939950 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:36:18.939960 | orchestrator | 2026-03-18 04:36:18.939971 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-18 04:36:18.939982 | orchestrator | Wednesday 18 March 2026 04:36:09 +0000 (0:00:14.045) 0:02:42.611 ******* 2026-03-18 04:36:18.939993 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:36:18.940008 | orchestrator | 2026-03-18 04:36:18.940027 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-18 04:36:18.940045 | orchestrator | 2026-03-18 04:36:18.940062 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-18 04:36:18.940091 | orchestrator | Wednesday 18 March 2026 04:36:18 +0000 (0:00:09.773) 0:02:52.385 ******* 2026-03-18 04:36:26.573182 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:36:26.573332 | orchestrator | 2026-03-18 04:36:26.573361 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-18 04:36:26.573380 | orchestrator | Wednesday 18 March 2026 04:36:20 +0000 (0:00:01.577) 0:02:53.963 ******* 2026-03-18 04:36:26.573399 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:36:26.573411 | orchestrator | ok: [testbed-node-2] 2026-03-18 
04:36:26.573422 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:36:26.573433 | orchestrator | 2026-03-18 04:36:26.573444 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 04:36:26.573457 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-18 04:36:26.573470 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-18 04:36:26.573481 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-18 04:36:26.573491 | orchestrator | 2026-03-18 04:36:26.573502 | orchestrator | 2026-03-18 04:36:26.573513 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 04:36:26.573524 | orchestrator | Wednesday 18 March 2026 04:36:26 +0000 (0:00:05.639) 0:02:59.602 ******* 2026-03-18 04:36:26.573555 | orchestrator | =============================================================================== 2026-03-18 04:36:26.573567 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 37.58s 2026-03-18 04:36:26.573578 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 28.73s 2026-03-18 04:36:26.573589 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 26.42s 2026-03-18 04:36:26.573600 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.85s 2026-03-18 04:36:26.573611 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 5.64s 2026-03-18 04:36:26.573622 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.61s 2026-03-18 04:36:26.573633 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.82s 2026-03-18 04:36:26.573643 | orchestrator | rabbitmq 
: Get current RabbitMQ version --------------------------------- 3.27s 2026-03-18 04:36:26.573654 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 3.02s 2026-03-18 04:36:26.573750 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.00s 2026-03-18 04:36:26.573764 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.79s 2026-03-18 04:36:26.573775 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.57s 2026-03-18 04:36:26.573812 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.54s 2026-03-18 04:36:26.573824 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.48s 2026-03-18 04:36:26.573834 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.45s 2026-03-18 04:36:26.573845 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.42s 2026-03-18 04:36:26.573856 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.41s 2026-03-18 04:36:26.573867 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.36s 2026-03-18 04:36:26.573877 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.34s 2026-03-18 04:36:26.573888 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 2.34s 2026-03-18 04:36:26.908306 | orchestrator | + osism apply -a upgrade openvswitch 2026-03-18 04:36:29.003329 | orchestrator | 2026-03-18 04:36:29 | INFO  | Task 66d15cbd-48a7-4b09-af62-938abf6766d1 (openvswitch) was prepared for execution. 2026-03-18 04:36:29.003456 | orchestrator | 2026-03-18 04:36:29 | INFO  | It takes a moment until task 66d15cbd-48a7-4b09-af62-938abf6766d1 (openvswitch) has been started and output is visible here. 
2026-03-18 04:36:46.784250 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-18 04:36:46.784375 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-18 04:36:46.784405 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-18 04:36:46.784416 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-18 04:36:46.784438 | orchestrator | 2026-03-18 04:36:46.784450 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 04:36:46.784462 | orchestrator | 2026-03-18 04:36:46.784473 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 04:36:46.784484 | orchestrator | Wednesday 18 March 2026 04:36:34 +0000 (0:00:01.102) 0:00:01.102 ******* 2026-03-18 04:36:46.784495 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:36:46.784507 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:36:46.784518 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:36:46.784528 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:36:46.784539 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:36:46.784549 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:36:46.784560 | orchestrator | 2026-03-18 04:36:46.784571 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 04:36:46.784581 | orchestrator | Wednesday 18 March 2026 04:36:36 +0000 (0:00:01.696) 0:00:02.798 ******* 2026-03-18 04:36:46.784592 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-18 04:36:46.784603 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-18 04:36:46.784613 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-18 04:36:46.784624 | orchestrator | ok: [testbed-node-3] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-18 04:36:46.784635 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-18 04:36:46.784645 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-18 04:36:46.784656 | orchestrator | 2026-03-18 04:36:46.784667 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-18 04:36:46.784713 | orchestrator | 2026-03-18 04:36:46.784724 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-18 04:36:46.784736 | orchestrator | Wednesday 18 March 2026 04:36:37 +0000 (0:00:01.064) 0:00:03.862 ******* 2026-03-18 04:36:46.784748 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 04:36:46.784786 | orchestrator | 2026-03-18 04:36:46.784814 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-18 04:36:46.784828 | orchestrator | Wednesday 18 March 2026 04:36:39 +0000 (0:00:02.078) 0:00:05.941 ******* 2026-03-18 04:36:46.784841 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-03-18 04:36:46.784854 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-03-18 04:36:46.784867 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-03-18 04:36:46.784879 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-03-18 04:36:46.784891 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-03-18 04:36:46.784903 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-03-18 04:36:46.784915 | orchestrator | 2026-03-18 04:36:46.784928 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-18 04:36:46.784941 | orchestrator | Wednesday 18 March 2026 
04:36:40 +0000 (0:00:01.313) 0:00:07.255 ******* 2026-03-18 04:36:46.784954 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-03-18 04:36:46.784966 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-03-18 04:36:46.784978 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-03-18 04:36:46.784990 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-03-18 04:36:46.785002 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-03-18 04:36:46.785014 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-03-18 04:36:46.785026 | orchestrator | 2026-03-18 04:36:46.785038 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-18 04:36:46.785051 | orchestrator | Wednesday 18 March 2026 04:36:42 +0000 (0:00:01.456) 0:00:08.711 ******* 2026-03-18 04:36:46.785064 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-18 04:36:46.785076 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:36:46.785089 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-18 04:36:46.785101 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:36:46.785113 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-18 04:36:46.785126 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:36:46.785137 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-18 04:36:46.785147 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:36:46.785158 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-18 04:36:46.785169 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:36:46.785180 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-18 04:36:46.785191 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:36:46.785201 | orchestrator | 2026-03-18 04:36:46.785212 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] 
*****************
2026-03-18 04:36:46.785223 | orchestrator | Wednesday 18 March 2026  04:36:44 +0000 (0:00:01.885)       0:00:10.597 *******
2026-03-18 04:36:46.785234 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:36:46.785244 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:36:46.785255 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:36:46.785265 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:36:46.785276 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:36:46.785303 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:36:46.785314 | orchestrator |
2026-03-18 04:36:46.785325 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-03-18 04:36:46.785336 | orchestrator | Wednesday 18 March 2026  04:36:45 +0000 (0:00:01.076)       0:00:11.673 *******
2026-03-18 04:36:46.785350 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:46.785378 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:46.785395 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:46.785407 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:46.785418 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:46.785440 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:49.891962 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:49.892070 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:49.892106 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:49.892119 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:49.892131 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:49.892160 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:49.892197 | orchestrator |
2026-03-18 04:36:49.892212 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-18 04:36:49.892225 | orchestrator | Wednesday 18 March 2026  04:36:46 +0000 (0:00:01.676)       0:00:13.350 *******
2026-03-18 04:36:49.892236 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:49.892255 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:49.892267 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:49.892278 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:49.892290 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:49.892318 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:53.406472 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:53.406586 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:53.406598 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:53.406606 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:53.406632 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:53.406654 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:53.406662 | orchestrator |
2026-03-18 04:36:53.406670 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-18 04:36:53.406721 | orchestrator | Wednesday 18 March 2026  04:36:49 +0000 (0:00:03.208)       0:00:16.558 *******
2026-03-18 04:36:53.406728 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:36:53.406736 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:36:53.406743 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:36:53.406749 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:36:53.406756 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:36:53.406763 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:36:53.406769 | orchestrator |
2026-03-18 04:36:53.406777 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-03-18 04:36:53.406784 | orchestrator | Wednesday 18 March 2026  04:36:51 +0000 (0:00:01.413)       0:00:17.972 *******
2026-03-18 04:36:53.406795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:53.406805 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:53.406812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:53.406825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:53.406839 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:54.719373 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:54.719502 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:54.719521 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:54.719557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:54.719569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:54.719651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:54.719715 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:54.719730 | orchestrator |
2026-03-18 04:36:54.719744 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-03-18 04:36:54.719757 | orchestrator | Wednesday 18 March 2026  04:36:53 +0000 (0:00:02.108)       0:00:20.080 *******
2026-03-18 04:36:54.719769 | orchestrator | changed: [testbed-node-0] => {
2026-03-18 04:36:54.719782 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:36:54.719793 | orchestrator | }
2026-03-18 04:36:54.719807 | orchestrator | changed: [testbed-node-1] => {
2026-03-18 04:36:54.719819 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:36:54.719832 | orchestrator | }
2026-03-18 04:36:54.719845 | orchestrator | changed: [testbed-node-2] => {
2026-03-18 04:36:54.719857 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:36:54.719869 | orchestrator | }
2026-03-18 04:36:54.719881 | orchestrator | changed: [testbed-node-3] => {
2026-03-18 04:36:54.719903 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:36:54.719916 | orchestrator | }
2026-03-18 04:36:54.719928 | orchestrator | changed: [testbed-node-4] => {
2026-03-18 04:36:54.719941 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:36:54.719953 | orchestrator | }
2026-03-18 04:36:54.719967 | orchestrator | changed: [testbed-node-5] => {
2026-03-18 04:36:54.719979 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:36:54.719991 | orchestrator | }
2026-03-18 04:36:54.720004 | orchestrator |
2026-03-18 04:36:54.720017 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-18 04:36:54.720028 | orchestrator | Wednesday 18 March 2026  04:36:54 +0000 (0:00:00.886)       0:00:20.966 *******
2026-03-18 04:36:54.720040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:54.720052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:36:54.720064 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:36:54.720075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:36:54.720101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:37:19.975309 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:37:19.975428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:37:19.975472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:37:19.975487 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:37:19.975499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:37:19.975511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:37:19.975523 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-03-18 04:37:19.975535 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-03-18 04:37:19.975558 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:37:19.975569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-18 04:37:19.975613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-18 04:37:19.975634 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:37:19.975646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/',
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-18 04:37:19.975658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-18 04:37:19.975670 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:37:19.975681 | orchestrator | 2026-03-18 04:37:19.975771 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-18 04:37:19.975787 | orchestrator | Wednesday 18 March 2026 04:36:56 +0000 (0:00:01.901) 0:00:22.868 ******* 2026-03-18 04:37:19.975798 | orchestrator | 2026-03-18 04:37:19.975809 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-18 04:37:19.975820 | orchestrator | Wednesday 18 March 2026 04:36:56 +0000 (0:00:00.167) 0:00:23.035 ******* 2026-03-18 04:37:19.975831 | orchestrator | 2026-03-18 04:37:19.975842 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-18 04:37:19.975855 | orchestrator | Wednesday 18 March 2026 04:36:56 +0000 (0:00:00.146) 0:00:23.182 ******* 2026-03-18 04:37:19.975867 | orchestrator | 2026-03-18 04:37:19.975879 | orchestrator | TASK [openvswitch : Flush 
Handlers] ******************************************** 2026-03-18 04:37:19.975892 | orchestrator | Wednesday 18 March 2026 04:36:56 +0000 (0:00:00.150) 0:00:23.332 ******* 2026-03-18 04:37:19.975905 | orchestrator | 2026-03-18 04:37:19.975918 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-18 04:37:19.975930 | orchestrator | Wednesday 18 March 2026 04:36:57 +0000 (0:00:00.343) 0:00:23.676 ******* 2026-03-18 04:37:19.975942 | orchestrator | 2026-03-18 04:37:19.975954 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-18 04:37:19.975966 | orchestrator | Wednesday 18 March 2026 04:36:57 +0000 (0:00:00.147) 0:00:23.823 ******* 2026-03-18 04:37:19.975979 | orchestrator | 2026-03-18 04:37:19.975992 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-18 04:37:19.976004 | orchestrator | Wednesday 18 March 2026 04:36:57 +0000 (0:00:00.152) 0:00:23.976 ******* 2026-03-18 04:37:19.976024 | orchestrator | changed: [testbed-node-3] 2026-03-18 04:37:19.976037 | orchestrator | changed: [testbed-node-4] 2026-03-18 04:37:19.976049 | orchestrator | changed: [testbed-node-5] 2026-03-18 04:37:19.976061 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:37:19.976074 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:37:19.976086 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:37:19.976099 | orchestrator | 2026-03-18 04:37:19.976112 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-18 04:37:19.976124 | orchestrator | Wednesday 18 March 2026 04:37:08 +0000 (0:00:11.081) 0:00:35.057 ******* 2026-03-18 04:37:19.976137 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:37:19.976150 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:37:19.976163 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:37:19.976175 | orchestrator | ok: [testbed-node-3] 
2026-03-18 04:37:19.976189 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:37:19.976201 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:37:19.976211 | orchestrator | 2026-03-18 04:37:19.976222 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-18 04:37:19.976233 | orchestrator | Wednesday 18 March 2026 04:37:09 +0000 (0:00:01.174) 0:00:36.232 ******* 2026-03-18 04:37:19.976250 | orchestrator | changed: [testbed-node-4] 2026-03-18 04:37:19.976269 | orchestrator | changed: [testbed-node-3] 2026-03-18 04:37:32.972940 | orchestrator | changed: [testbed-node-5] 2026-03-18 04:37:32.973086 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:37:32.973112 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:37:32.973130 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:37:32.973146 | orchestrator | 2026-03-18 04:37:32.973165 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-18 04:37:32.973184 | orchestrator | Wednesday 18 March 2026 04:37:19 +0000 (0:00:10.306) 0:00:46.539 ******* 2026-03-18 04:37:32.973205 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-18 04:37:32.973226 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-18 04:37:32.973246 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-18 04:37:32.973265 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-18 04:37:32.973284 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-18 04:37:32.973297 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-5'}) 2026-03-18 04:37:32.973308 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-18 04:37:32.973319 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-18 04:37:32.973330 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-18 04:37:32.973342 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-18 04:37:32.973353 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-18 04:37:32.973363 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-18 04:37:32.973375 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-18 04:37:32.973393 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-18 04:37:32.973409 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-18 04:37:32.973458 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-18 04:37:32.973475 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-18 04:37:32.973492 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-18 04:37:32.973510 | orchestrator | 2026-03-18 04:37:32.973527 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2026-03-18 04:37:32.973546 | orchestrator | Wednesday 18 March 2026 04:37:26 +0000 (0:00:06.395) 0:00:52.935 ******* 2026-03-18 04:37:32.973567 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-18 04:37:32.973587 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:37:32.973605 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-18 04:37:32.973624 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:37:32.973641 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-18 04:37:32.973658 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:37:32.973676 | orchestrator | ok: [testbed-node-0] => (item=br-ex) 2026-03-18 04:37:32.973693 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-03-18 04:37:32.973759 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-03-18 04:37:32.973771 | orchestrator | 2026-03-18 04:37:32.973791 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-18 04:37:32.973809 | orchestrator | Wednesday 18 March 2026 04:37:28 +0000 (0:00:02.263) 0:00:55.198 ******* 2026-03-18 04:37:32.973828 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-18 04:37:32.973846 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:37:32.973858 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-18 04:37:32.973868 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:37:32.973879 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-18 04:37:32.973890 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:37:32.973901 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-18 04:37:32.973911 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-18 04:37:32.973925 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-18 04:37:32.973943 | orchestrator | 2026-03-18 04:37:32.973962 | 
orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 04:37:32.973977 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-18 04:37:32.974103 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-18 04:37:32.974145 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-18 04:37:32.974157 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-18 04:37:32.974168 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-18 04:37:32.974178 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-18 04:37:32.974189 | orchestrator | 2026-03-18 04:37:32.974200 | orchestrator | 2026-03-18 04:37:32.974210 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 04:37:32.974221 | orchestrator | Wednesday 18 March 2026 04:37:32 +0000 (0:00:03.910) 0:00:59.108 ******* 2026-03-18 04:37:32.974232 | orchestrator | =============================================================================== 2026-03-18 04:37:32.974259 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.08s 2026-03-18 04:37:32.974270 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 10.31s 2026-03-18 04:37:32.974281 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.40s 2026-03-18 04:37:32.974291 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.91s 2026-03-18 04:37:32.974302 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.21s 2026-03-18 04:37:32.974313 | 
orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.26s 2026-03-18 04:37:32.974324 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.11s 2026-03-18 04:37:32.974334 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.08s 2026-03-18 04:37:32.974345 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.90s 2026-03-18 04:37:32.974355 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.89s 2026-03-18 04:37:32.974366 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.70s 2026-03-18 04:37:32.974376 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.68s 2026-03-18 04:37:32.974386 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.46s 2026-03-18 04:37:32.974397 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.41s 2026-03-18 04:37:32.974407 | orchestrator | module-load : Load modules ---------------------------------------------- 1.31s 2026-03-18 04:37:32.974424 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.17s 2026-03-18 04:37:32.974442 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.11s 2026-03-18 04:37:32.974461 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.08s 2026-03-18 04:37:32.974480 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.06s 2026-03-18 04:37:32.974498 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.89s 2026-03-18 04:37:33.294630 | orchestrator | + osism apply -a upgrade ovn 2026-03-18 04:37:35.560332 | orchestrator | 2026-03-18 04:37:35 | INFO  | Task 
0b13bbd1-efff-408f-bf6a-928141cb4753 (ovn) was prepared for execution. 2026-03-18 04:37:35.560434 | orchestrator | 2026-03-18 04:37:35 | INFO  | It takes a moment until task 0b13bbd1-efff-408f-bf6a-928141cb4753 (ovn) has been started and output is visible here. 2026-03-18 04:37:58.061413 | orchestrator | 2026-03-18 04:37:58.061555 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-18 04:37:58.061581 | orchestrator | 2026-03-18 04:37:58.061601 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-18 04:37:58.061619 | orchestrator | Wednesday 18 March 2026 04:37:41 +0000 (0:00:01.360) 0:00:01.360 ******* 2026-03-18 04:37:58.061639 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:37:58.061656 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:37:58.061674 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:37:58.061691 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:37:58.061708 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:37:58.061774 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:37:58.061792 | orchestrator | 2026-03-18 04:37:58.061810 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-18 04:37:58.061829 | orchestrator | Wednesday 18 March 2026 04:37:44 +0000 (0:00:03.283) 0:00:04.643 ******* 2026-03-18 04:37:58.061847 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-18 04:37:58.061867 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-18 04:37:58.061884 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-18 04:37:58.061905 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-18 04:37:58.061922 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-18 04:37:58.061944 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-18 04:37:58.061996 | orchestrator | 2026-03-18 
04:37:58.062101 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-18 04:37:58.062125 | orchestrator | 2026-03-18 04:37:58.062146 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-18 04:37:58.062166 | orchestrator | Wednesday 18 March 2026 04:37:47 +0000 (0:00:03.170) 0:00:07.813 ******* 2026-03-18 04:37:58.062205 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 04:37:58.062224 | orchestrator | 2026-03-18 04:37:58.062243 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-18 04:37:58.062261 | orchestrator | Wednesday 18 March 2026 04:37:50 +0000 (0:00:02.968) 0:00:10.782 ******* 2026-03-18 04:37:58.062282 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:37:58.062304 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:37:58.062321 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:37:58.062339 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:37:58.062358 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:37:58.062405 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:37:58.062424 | orchestrator | 2026-03-18 04:37:58.062442 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-18 04:37:58.062460 | orchestrator | Wednesday 18 March 2026 
04:37:53 +0000 (0:00:02.433) 0:00:13.216 ******* 2026-03-18 04:37:58.062479 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:37:58.062513 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:37:58.062539 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:37:58.062558 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:37:58.062576 | 
orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:37:58.062594 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:37:58.062613 | orchestrator | 2026-03-18 04:37:58.062630 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-18 04:37:58.062647 | orchestrator | Wednesday 18 March 2026 04:37:55 +0000 (0:00:02.684) 0:00:15.900 ******* 2026-03-18 04:37:58.062665 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:37:58.062683 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:37:58.062738 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:38:06.380428 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:38:06.380534 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:38:06.380543 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:38:06.380549 | orchestrator | 2026-03-18 04:38:06.380557 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-18 04:38:06.380565 | orchestrator | Wednesday 18 March 2026 04:37:58 +0000 (0:00:02.346) 0:00:18.247 ******* 2026-03-18 04:38:06.380571 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:38:06.380578 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:38:06.380584 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:38:06.380590 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:38:06.380596 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:38:06.380633 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:38:06.380640 | orchestrator | 2026-03-18 04:38:06.380646 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-03-18 04:38:06.380651 | orchestrator | Wednesday 18 March 2026 04:38:01 +0000 (0:00:03.153) 0:00:21.401 ******* 2026-03-18 04:38:06.380658 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:38:06.380668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:38:06.380674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:38:06.380680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:38:06.380686 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:38:06.380692 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:38:06.380698 | orchestrator | 2026-03-18 04:38:06.380704 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-03-18 04:38:06.380710 | orchestrator | Wednesday 18 March 2026 04:38:04 +0000 (0:00:02.967) 0:00:24.369 ******* 2026-03-18 04:38:06.380773 | orchestrator | changed: [testbed-node-0] => { 2026-03-18 04:38:06.380782 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:38:06.380789 | orchestrator | } 2026-03-18 04:38:06.380795 | orchestrator | changed: [testbed-node-1] => { 2026-03-18 04:38:06.380810 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:38:06.380816 | orchestrator | } 2026-03-18 04:38:06.380822 | orchestrator | changed: [testbed-node-2] => { 2026-03-18 04:38:06.380829 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:38:06.380835 | orchestrator | } 2026-03-18 04:38:06.380848 | orchestrator | changed: [testbed-node-3] => { 2026-03-18 04:38:06.380855 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:38:06.380860 | orchestrator | } 2026-03-18 04:38:06.380865 | orchestrator | changed: [testbed-node-4] => { 2026-03-18 04:38:06.380871 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:38:06.380876 | orchestrator | } 2026-03-18 04:38:06.380882 | orchestrator | changed: [testbed-node-5] => { 2026-03-18 04:38:06.380887 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:38:06.380892 | orchestrator | } 
2026-03-18 04:38:06.380897 | orchestrator | 2026-03-18 04:38:06.380903 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-18 04:38:06.380909 | orchestrator | Wednesday 18 March 2026 04:38:06 +0000 (0:00:02.074) 0:00:26.444 ******* 2026-03-18 04:38:06.380923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:38:35.373523 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:38:35.373639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:38:35.373662 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:38:35.373693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:38:35.373705 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:38:35.373717 | orchestrator 
| skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:38:35.373729 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:38:35.373740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:38:35.373802 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:38:35.373837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:38:35.373849 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:38:35.373861 | orchestrator | 2026-03-18 04:38:35.373873 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-18 04:38:35.373886 | orchestrator | Wednesday 18 March 2026 04:38:08 +0000 (0:00:02.537) 0:00:28.982 ******* 2026-03-18 04:38:35.373897 | orchestrator | ok: [testbed-node-0] 2026-03-18 
04:38:35.373909 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:38:35.373920 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:38:35.373930 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:38:35.373941 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:38:35.373952 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:38:35.373963 | orchestrator | 2026-03-18 04:38:35.373974 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-18 04:38:35.373986 | orchestrator | Wednesday 18 March 2026 04:38:12 +0000 (0:00:03.640) 0:00:32.622 ******* 2026-03-18 04:38:35.373997 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-18 04:38:35.374008 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-18 04:38:35.374076 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-18 04:38:35.374090 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-18 04:38:35.374102 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-18 04:38:35.374115 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-18 04:38:35.374127 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-18 04:38:35.374139 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-18 04:38:35.374151 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-18 04:38:35.374163 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-18 04:38:35.374174 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-18 04:38:35.374203 | 
orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-18 04:38:35.374214 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-18 04:38:35.374227 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-18 04:38:35.374238 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-18 04:38:35.374249 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-18 04:38:35.374267 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-18 04:38:35.374278 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-18 04:38:35.374289 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-18 04:38:35.374310 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-18 04:38:35.374321 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-18 04:38:35.374332 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-18 04:38:35.374343 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-18 04:38:35.374354 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 
2026-03-18 04:38:35.374365 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-18 04:38:35.374375 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-18 04:38:35.374386 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-18 04:38:35.374397 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-18 04:38:35.374411 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-18 04:38:35.374430 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-18 04:38:35.374449 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-18 04:38:35.374466 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-18 04:38:35.374484 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-18 04:38:35.374502 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-18 04:38:35.374519 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-18 04:38:35.374536 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-18 04:38:35.374554 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-18 04:38:35.374570 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-18 04:38:35.374587 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-18 04:38:35.374606 | orchestrator | 
ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-18 04:38:35.374623 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-18 04:38:35.374642 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-18 04:38:35.374662 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-18 04:38:35.374690 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-18 04:38:35.374704 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-18 04:38:35.374715 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-18 04:38:35.374725 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-18 04:38:35.374745 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-18 04:41:23.735789 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-18 04:41:23.736077 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-18 04:41:23.736111 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-18 04:41:23.736131 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 
'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-18 04:41:23.736173 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-18 04:41:23.736193 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-18 04:41:23.736211 | orchestrator | 2026-03-18 04:41:23.736231 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-18 04:41:23.736252 | orchestrator | Wednesday 18 March 2026 04:38:32 +0000 (0:00:19.777) 0:00:52.400 ******* 2026-03-18 04:41:23.736269 | orchestrator | 2026-03-18 04:41:23.736286 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-18 04:41:23.736305 | orchestrator | Wednesday 18 March 2026 04:38:32 +0000 (0:00:00.443) 0:00:52.843 ******* 2026-03-18 04:41:23.736326 | orchestrator | 2026-03-18 04:41:23.736347 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-18 04:41:23.736367 | orchestrator | Wednesday 18 March 2026 04:38:33 +0000 (0:00:00.445) 0:00:53.289 ******* 2026-03-18 04:41:23.736389 | orchestrator | 2026-03-18 04:41:23.736410 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-18 04:41:23.736431 | orchestrator | Wednesday 18 March 2026 04:38:33 +0000 (0:00:00.456) 0:00:53.746 ******* 2026-03-18 04:41:23.736452 | orchestrator | 2026-03-18 04:41:23.736473 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-18 04:41:23.736494 | orchestrator | Wednesday 18 March 2026 04:38:34 +0000 (0:00:00.470) 0:00:54.216 ******* 2026-03-18 04:41:23.736510 | orchestrator | 2026-03-18 04:41:23.736523 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-03-18 04:41:23.736535 | orchestrator | Wednesday 18 March 2026 04:38:34 +0000 (0:00:00.452) 0:00:54.668 ******* 2026-03-18 04:41:23.736548 | orchestrator | 2026-03-18 04:41:23.736561 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-18 04:41:23.736573 | orchestrator | Wednesday 18 March 2026 04:38:35 +0000 (0:00:00.859) 0:00:55.528 ******* 2026-03-18 04:41:23.736586 | orchestrator | 2026-03-18 04:41:23.736598 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-03-18 04:41:23.736611 | orchestrator | changed: [testbed-node-3] 2026-03-18 04:41:23.736625 | orchestrator | changed: [testbed-node-5] 2026-03-18 04:41:23.736637 | orchestrator | changed: [testbed-node-4] 2026-03-18 04:41:23.736650 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:41:23.736660 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:41:23.736671 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:41:23.736682 | orchestrator | 2026-03-18 04:41:23.736693 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-18 04:41:23.736704 | orchestrator | 2026-03-18 04:41:23.736714 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-18 04:41:23.736725 | orchestrator | Wednesday 18 March 2026 04:40:47 +0000 (0:02:11.796) 0:03:07.324 ******* 2026-03-18 04:41:23.736736 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:41:23.736746 | orchestrator | 2026-03-18 04:41:23.736757 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-18 04:41:23.736768 | orchestrator | Wednesday 18 March 2026 04:40:49 +0000 (0:00:01.937) 0:03:09.262 ******* 2026-03-18 04:41:23.736779 | orchestrator | included: 
/ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-18 04:41:23.736790 | orchestrator | 2026-03-18 04:41:23.736800 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-18 04:41:23.736824 | orchestrator | Wednesday 18 March 2026 04:40:51 +0000 (0:00:01.968) 0:03:11.230 ******* 2026-03-18 04:41:23.736835 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:41:23.736847 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:41:23.736857 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:41:23.736868 | orchestrator | 2026-03-18 04:41:23.736879 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-18 04:41:23.736889 | orchestrator | Wednesday 18 March 2026 04:40:52 +0000 (0:00:01.903) 0:03:13.134 ******* 2026-03-18 04:41:23.736900 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:41:23.736911 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:41:23.736921 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:41:23.736932 | orchestrator | 2026-03-18 04:41:23.736974 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-18 04:41:23.736986 | orchestrator | Wednesday 18 March 2026 04:40:54 +0000 (0:00:01.417) 0:03:14.552 ******* 2026-03-18 04:41:23.736997 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:41:23.737007 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:41:23.737018 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:41:23.737029 | orchestrator | 2026-03-18 04:41:23.737040 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-18 04:41:23.737050 | orchestrator | Wednesday 18 March 2026 04:40:55 +0000 (0:00:01.410) 0:03:15.962 ******* 2026-03-18 04:41:23.737061 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:41:23.737072 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:41:23.737083 | orchestrator 
| ok: [testbed-node-2] 2026-03-18 04:41:23.737093 | orchestrator | 2026-03-18 04:41:23.737104 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-18 04:41:23.737118 | orchestrator | Wednesday 18 March 2026 04:40:57 +0000 (0:00:01.583) 0:03:17.546 ******* 2026-03-18 04:41:23.737137 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:41:23.737181 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:41:23.737201 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:41:23.737220 | orchestrator | 2026-03-18 04:41:23.737239 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-18 04:41:23.737257 | orchestrator | Wednesday 18 March 2026 04:40:58 +0000 (0:00:01.367) 0:03:18.913 ******* 2026-03-18 04:41:23.737275 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:41:23.737293 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:41:23.737311 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:41:23.737329 | orchestrator | 2026-03-18 04:41:23.737347 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-18 04:41:23.737366 | orchestrator | Wednesday 18 March 2026 04:41:00 +0000 (0:00:01.395) 0:03:20.309 ******* 2026-03-18 04:41:23.737385 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:41:23.737401 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:41:23.737415 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:41:23.737434 | orchestrator | 2026-03-18 04:41:23.737463 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-18 04:41:23.737482 | orchestrator | Wednesday 18 March 2026 04:41:01 +0000 (0:00:01.825) 0:03:22.135 ******* 2026-03-18 04:41:23.737501 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:41:23.737519 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:41:23.737537 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:41:23.737555 | 
orchestrator | 2026-03-18 04:41:23.737574 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-18 04:41:23.737592 | orchestrator | Wednesday 18 March 2026 04:41:03 +0000 (0:00:01.619) 0:03:23.754 ******* 2026-03-18 04:41:23.737611 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:41:23.737630 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:41:23.737648 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:41:23.737665 | orchestrator | 2026-03-18 04:41:23.737676 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-18 04:41:23.737687 | orchestrator | Wednesday 18 March 2026 04:41:05 +0000 (0:00:01.919) 0:03:25.673 ******* 2026-03-18 04:41:23.737707 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:41:23.737724 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:41:23.737742 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:41:23.737760 | orchestrator | 2026-03-18 04:41:23.737777 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-18 04:41:23.737794 | orchestrator | Wednesday 18 March 2026 04:41:06 +0000 (0:00:01.398) 0:03:27.072 ******* 2026-03-18 04:41:23.737810 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:41:23.737828 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:41:23.737846 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:41:23.737864 | orchestrator | 2026-03-18 04:41:23.737882 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-18 04:41:23.737902 | orchestrator | Wednesday 18 March 2026 04:41:08 +0000 (0:00:01.360) 0:03:28.433 ******* 2026-03-18 04:41:23.737913 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:41:23.737924 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:41:23.737956 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:41:23.737968 | orchestrator | 2026-03-18 
04:41:23.737979 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-18 04:41:23.737990 | orchestrator | Wednesday 18 March 2026 04:41:09 +0000 (0:00:01.387) 0:03:29.820 ******* 2026-03-18 04:41:23.738000 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:41:23.738011 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:41:23.738105 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:41:23.738116 | orchestrator | 2026-03-18 04:41:23.738127 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-18 04:41:23.738138 | orchestrator | Wednesday 18 March 2026 04:41:11 +0000 (0:00:01.775) 0:03:31.595 ******* 2026-03-18 04:41:23.738149 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:41:23.738160 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:41:23.738170 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:41:23.738181 | orchestrator | 2026-03-18 04:41:23.738192 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-18 04:41:23.738203 | orchestrator | Wednesday 18 March 2026 04:41:12 +0000 (0:00:01.370) 0:03:32.966 ******* 2026-03-18 04:41:23.738213 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:41:23.738224 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:41:23.738234 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:41:23.738245 | orchestrator | 2026-03-18 04:41:23.738256 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-18 04:41:23.738267 | orchestrator | Wednesday 18 March 2026 04:41:14 +0000 (0:00:02.128) 0:03:35.095 ******* 2026-03-18 04:41:23.738277 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:41:23.738288 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:41:23.738299 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:41:23.738309 | orchestrator | 2026-03-18 04:41:23.738320 | orchestrator | TASK [ovn-db : Fail on existing OVN SB 
cluster with no leader] ***************** 2026-03-18 04:41:23.738331 | orchestrator | Wednesday 18 March 2026 04:41:16 +0000 (0:00:01.483) 0:03:36.578 ******* 2026-03-18 04:41:23.738342 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:41:23.738353 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:41:23.738363 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:41:23.738374 | orchestrator | 2026-03-18 04:41:23.738384 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-18 04:41:23.738395 | orchestrator | Wednesday 18 March 2026 04:41:17 +0000 (0:00:01.397) 0:03:37.976 ******* 2026-03-18 04:41:23.738406 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:41:23.738417 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:41:23.738428 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:41:23.738438 | orchestrator | 2026-03-18 04:41:23.738449 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-18 04:41:23.738465 | orchestrator | Wednesday 18 March 2026 04:41:19 +0000 (0:00:01.701) 0:03:39.677 ******* 2026-03-18 04:41:23.738508 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:29.872881 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:29.873038 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:29.873057 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:29.873070 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:29.873082 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:29.873094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:41:29.873106 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:29.873159 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:29.873179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:41:29.873191 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:29.873203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:41:29.873215 | orchestrator | 
2026-03-18 04:41:29.873229 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-18 04:41:29.873248 | orchestrator | Wednesday 18 March 2026 04:41:23 +0000 (0:00:04.242) 0:03:43.920 ******* 2026-03-18 04:41:29.873267 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:29.873287 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:29.873305 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:29.873335 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:29.873361 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:44.760417 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:44.760529 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:44.760545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:41:44.760556 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:44.760564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:41:44.760595 | orchestrator | 
ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:41:44.760604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:41:44.760612 | orchestrator |
2026-03-18 04:41:44.760622 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-03-18 04:41:44.760632 | orchestrator | Wednesday 18 March 2026 04:41:29 +0000 (0:00:06.139) 0:03:50.060 *******
2026-03-18 04:41:44.760641 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-03-18 04:41:44.760649 | orchestrator |
2026-03-18 04:41:44.760657 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-03-18 04:41:44.760665 | orchestrator | Wednesday 18 March 2026 04:41:31 +0000 (0:00:01.944) 0:03:52.004 *******
2026-03-18 04:41:44.760673 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:41:44.760696 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:41:44.760717 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:41:44.760726 | orchestrator |
2026-03-18 04:41:44.760733 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-03-18 04:41:44.760741 | orchestrator | Wednesday 18 March 2026 04:41:33 +0000 (0:00:01.761) 0:03:53.766 *******
2026-03-18 04:41:44.760748 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:41:44.760756 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:41:44.760763 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:41:44.760771 | orchestrator |
2026-03-18 04:41:44.760779 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-03-18 04:41:44.760787 | orchestrator | Wednesday 18 March 2026 04:41:36 +0000 (0:00:02.693) 0:03:56.459 *******
2026-03-18 04:41:44.760794 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:41:44.760802 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:41:44.760809 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:41:44.760817 | orchestrator |
2026-03-18 04:41:44.760825 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ********************
2026-03-18 04:41:44.760832 | orchestrator | Wednesday 18 March 2026 04:41:39 +0000 (0:00:02.953) 0:03:59.413 *******
2026-03-18 04:41:44.760841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:41:44.760850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value':
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:44.760862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:44.760870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:44.760878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:44.760889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:44.760904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:49.348597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:41:49.348722 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:49.348766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:41:49.348780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:41:49.348791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': 
['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:41:49.348803 | orchestrator |
2026-03-18 04:41:49.348816 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-03-18 04:41:49.348828 | orchestrator | Wednesday 18 March 2026 04:41:44 +0000 (0:00:05.517) 0:04:04.930 *******
2026-03-18 04:41:49.348840 | orchestrator | changed: [testbed-node-0] => {
2026-03-18 04:41:49.348852 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:41:49.348863 | orchestrator | }
2026-03-18 04:41:49.348874 | orchestrator | changed: [testbed-node-1] => {
2026-03-18 04:41:49.348885 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:41:49.348895 | orchestrator | }
2026-03-18 04:41:49.348906 | orchestrator | changed: [testbed-node-2] => {
2026-03-18 04:41:49.348917 | orchestrator |  "msg": "Notifying handlers"
2026-03-18 04:41:49.348927 | orchestrator | }
2026-03-18 04:41:49.348938 | orchestrator |
2026-03-18 04:41:49.348949 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-18 04:41:49.348960 | orchestrator | Wednesday 18 March 2026 04:41:46 +0000 (0:00:01.478) 0:04:06.409 *******
2026-03-18 04:41:49.349040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-18 04:41:49.349074 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:41:49.349087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:41:49.349108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:41:49.349122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:41:49.349135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:41:49.349148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:41:49.349162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-18 04:41:49.349180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-18 04:41:49.349202 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-18 04:43:19.133616 | orchestrator | 2026-03-18 04:43:19.133750 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-03-18 04:43:19.133777 | orchestrator | Wednesday 18 March 2026 04:41:49 +0000 (0:00:03.121) 0:04:09.531 ******* 2026-03-18 04:43:19.133790 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-03-18 04:43:19.133802 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-03-18 04:43:19.133813 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-03-18 04:43:19.133824 | orchestrator | 2026-03-18 04:43:19.133836 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-18 04:43:19.133848 | orchestrator | Wednesday 18 March 2026 04:41:51 +0000 
(0:00:02.262) 0:04:11.794 ******* 2026-03-18 04:43:19.133859 | orchestrator | changed: [testbed-node-0] => { 2026-03-18 04:43:19.133871 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:43:19.133882 | orchestrator | } 2026-03-18 04:43:19.133893 | orchestrator | changed: [testbed-node-1] => { 2026-03-18 04:43:19.133904 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:43:19.133915 | orchestrator | } 2026-03-18 04:43:19.133926 | orchestrator | changed: [testbed-node-2] => { 2026-03-18 04:43:19.133937 | orchestrator |  "msg": "Notifying handlers" 2026-03-18 04:43:19.133947 | orchestrator | } 2026-03-18 04:43:19.133958 | orchestrator | 2026-03-18 04:43:19.133969 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-18 04:43:19.133980 | orchestrator | Wednesday 18 March 2026 04:41:53 +0000 (0:00:01.410) 0:04:13.204 ******* 2026-03-18 04:43:19.133991 | orchestrator | 2026-03-18 04:43:19.134002 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-18 04:43:19.134014 | orchestrator | Wednesday 18 March 2026 04:41:53 +0000 (0:00:00.473) 0:04:13.677 ******* 2026-03-18 04:43:19.134138 | orchestrator | 2026-03-18 04:43:19.134152 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-18 04:43:19.134165 | orchestrator | Wednesday 18 March 2026 04:41:53 +0000 (0:00:00.449) 0:04:14.126 ******* 2026-03-18 04:43:19.134177 | orchestrator | 2026-03-18 04:43:19.134189 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-18 04:43:19.134202 | orchestrator | Wednesday 18 March 2026 04:41:54 +0000 (0:00:01.039) 0:04:15.166 ******* 2026-03-18 04:43:19.134215 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:43:19.134228 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:43:19.134244 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:43:19.134262 | 
orchestrator | 2026-03-18 04:43:19.134274 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-18 04:43:19.134285 | orchestrator | Wednesday 18 March 2026 04:42:12 +0000 (0:00:17.326) 0:04:32.492 ******* 2026-03-18 04:43:19.134296 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:43:19.134307 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:43:19.134318 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:43:19.134328 | orchestrator | 2026-03-18 04:43:19.134339 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-03-18 04:43:19.134350 | orchestrator | Wednesday 18 March 2026 04:42:29 +0000 (0:00:16.860) 0:04:49.352 ******* 2026-03-18 04:43:19.134361 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-03-18 04:43:19.134372 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-03-18 04:43:19.134383 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-03-18 04:43:19.134394 | orchestrator | 2026-03-18 04:43:19.134405 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-18 04:43:19.134416 | orchestrator | Wednesday 18 March 2026 04:42:40 +0000 (0:00:11.695) 0:05:01.048 ******* 2026-03-18 04:43:19.134453 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:43:19.134465 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:43:19.134476 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:43:19.134487 | orchestrator | 2026-03-18 04:43:19.134497 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-18 04:43:19.134508 | orchestrator | Wednesday 18 March 2026 04:42:58 +0000 (0:00:17.602) 0:05:18.650 ******* 2026-03-18 04:43:19.134519 | orchestrator | Pausing for 5 seconds 2026-03-18 04:43:19.134530 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:43:19.134541 | orchestrator | 2026-03-18 04:43:19.134552 | orchestrator | 
TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-18 04:43:19.134562 | orchestrator | Wednesday 18 March 2026 04:43:04 +0000 (0:00:06.206) 0:05:24.856 ******* 2026-03-18 04:43:19.134573 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:43:19.134584 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:43:19.134594 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:43:19.134605 | orchestrator | 2026-03-18 04:43:19.134616 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-18 04:43:19.134627 | orchestrator | Wednesday 18 March 2026 04:43:06 +0000 (0:00:01.852) 0:05:26.709 ******* 2026-03-18 04:43:19.134638 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:43:19.134663 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:43:19.134674 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:43:19.134685 | orchestrator | 2026-03-18 04:43:19.134696 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-18 04:43:19.134707 | orchestrator | Wednesday 18 March 2026 04:43:08 +0000 (0:00:01.801) 0:05:28.511 ******* 2026-03-18 04:43:19.134724 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:43:19.134741 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:43:19.134761 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:43:19.134779 | orchestrator | 2026-03-18 04:43:19.134799 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-18 04:43:19.134815 | orchestrator | Wednesday 18 March 2026 04:43:10 +0000 (0:00:01.850) 0:05:30.362 ******* 2026-03-18 04:43:19.134826 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:43:19.134837 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:43:19.134847 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:43:19.134858 | orchestrator | 2026-03-18 04:43:19.134868 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] 
********************************************* 2026-03-18 04:43:19.134879 | orchestrator | Wednesday 18 March 2026 04:43:12 +0000 (0:00:01.975) 0:05:32.337 *******
2026-03-18 04:43:19.134890 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:43:19.134900 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:43:19.134911 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:43:19.134921 | orchestrator |
2026-03-18 04:43:19.134932 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-18 04:43:19.134961 | orchestrator | Wednesday 18 March 2026 04:43:13 +0000 (0:00:01.835) 0:05:34.173 *******
2026-03-18 04:43:19.134972 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:43:19.134983 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:43:19.134993 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:43:19.135004 | orchestrator |
2026-03-18 04:43:19.135015 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-03-18 04:43:19.135026 | orchestrator | Wednesday 18 March 2026 04:43:15 +0000 (0:00:01.809) 0:05:35.982 *******
2026-03-18 04:43:19.135036 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-03-18 04:43:19.135047 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-03-18 04:43:19.135058 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-03-18 04:43:19.135094 | orchestrator |
2026-03-18 04:43:19.135107 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 04:43:19.135119 | orchestrator | testbed-node-0 : ok=48  changed=15  unreachable=0  failed=0  skipped=8  rescued=0  ignored=0
2026-03-18 04:43:19.135131 | orchestrator | testbed-node-1 : ok=49  changed=17  unreachable=0  failed=0  skipped=6  rescued=0  ignored=0
2026-03-18 04:43:19.135152 | orchestrator | testbed-node-2 : ok=47  changed=15  unreachable=0  failed=0  skipped=8  rescued=0  ignored=0
2026-03-18 04:43:19.135163 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
2026-03-18 04:43:19.135174 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
2026-03-18 04:43:19.135184 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
2026-03-18 04:43:19.135195 | orchestrator |
2026-03-18 04:43:19.135206 | orchestrator |
2026-03-18 04:43:19.135217 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 04:43:19.135228 | orchestrator | Wednesday 18 March 2026 04:43:18 +0000 (0:00:02.931) 0:05:38.913 *******
2026-03-18 04:43:19.135239 | orchestrator | ===============================================================================
2026-03-18 04:43:19.135250 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.80s
2026-03-18 04:43:19.135260 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.78s
2026-03-18 04:43:19.135271 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 17.60s
2026-03-18 04:43:19.135282 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 17.33s
2026-03-18 04:43:19.135292 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.86s
2026-03-18 04:43:19.135303 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 11.70s
2026-03-18 04:43:19.135314 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.21s
2026-03-18 04:43:19.135324 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.14s
2026-03-18 04:43:19.135335 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.52s
2026-03-18 04:43:19.135346 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.24s
2026-03-18 04:43:19.135357 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.64s
2026-03-18 04:43:19.135367 | orchestrator | Group hosts based on Kolla action --------------------------------------- 3.28s
2026-03-18 04:43:19.135378 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.17s
2026-03-18 04:43:19.135389 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.15s
2026-03-18 04:43:19.135399 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.13s
2026-03-18 04:43:19.135410 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.12s
2026-03-18 04:43:19.135421 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.97s
2026-03-18 04:43:19.135437 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 2.97s
2026-03-18 04:43:19.135448 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.95s
2026-03-18 04:43:19.135459 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 2.93s
2026-03-18 04:43:19.470123 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-03-18 04:43:19.470232 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-18 04:43:19.470247 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-03-18 04:43:19.478477 | orchestrator | + set -e
2026-03-18 04:43:19.478581 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-18 04:43:19.478592 | orchestrator | ++ export INTERACTIVE=false
2026-03-18 04:43:19.478602 | orchestrator | ++ INTERACTIVE=false
2026-03-18 04:43:19.478610 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-18 04:43:19.478619 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-18
04:43:19.478635 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes
2026-03-18 04:43:21.662496 | orchestrator | 2026-03-18 04:43:21 | INFO  | Task 08ce314a-4a39-42a8-8493-e68a06914d0b (ceph-rolling_update) was prepared for execution.
2026-03-18 04:43:21.662652 | orchestrator | 2026-03-18 04:43:21 | INFO  | It takes a moment until task 08ce314a-4a39-42a8-8493-e68a06914d0b (ceph-rolling_update) has been started and output is visible here.
2026-03-18 04:44:20.918412 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-18 04:44:20.918545 | orchestrator | 2.16.14
2026-03-18 04:44:20.918564 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-03-18 04:44:20.918577 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-03-18 04:44:20.918600 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-03-18 04:44:20.918612 | orchestrator | (): 'NoneType' object is not subscriptable
2026-03-18 04:44:20.918633 | orchestrator |
2026-03-18 04:44:20.918645 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-03-18 04:44:20.918656 | orchestrator |
2026-03-18 04:44:20.918666 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-03-18 04:44:20.918677 | orchestrator | Wednesday 18 March 2026 04:43:29 +0000 (0:00:01.494) 0:00:01.494 *******
2026-03-18 04:44:20.918688 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-03-18 04:44:20.918699 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-03-18 04:44:20.918710 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients
2026-03-18 04:44:20.918722 | orchestrator | skipping: [localhost]
2026-03-18 04:44:20.918733 | orchestrator |
2026-03-18 04:44:20.918744 | orchestrator |
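The upgrade script above exports `OSISM_APPLY_RETRY=1` before invoking `osism apply ceph-rolling_update -e ireallymeanit=yes`, i.e. the apply step is attempted exactly once and any failure propagates (the script runs under `set -e`). A minimal sketch of what such a retry wrapper could look like — the wrapper function itself is an assumption for illustration; only the `OSISM_APPLY_RETRY` variable and the `osism apply` invocation appear in the log:

```shell
# Hypothetical retry wrapper in the spirit of OSISM_APPLY_RETRY.
# Runs the given command up to $OSISM_APPLY_RETRY times (default 1),
# returning 0 on the first success and 1 if all attempts fail.
apply_with_retry() {
  retries="${OSISM_APPLY_RETRY:-1}"
  attempt=1
  while true; do
    if "$@"; then
      return 0                      # command succeeded
    fi
    if [ "$attempt" -ge "$retries" ]; then
      return 1                      # out of attempts
    fi
    attempt=$((attempt + 1))        # try again
  done
}

# Usage (mirrors the log's invocation; 'osism' is only available on the manager):
# OSISM_APPLY_RETRY=1 apply_with_retry osism apply ceph-rolling_update -e ireallymeanit=yes
```

With `OSISM_APPLY_RETRY=1`, as in this job, the wrapper degrades to a single attempt, matching the behavior seen in the log.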
PLAY [Gather facts and check the init system] ********************************** 2026-03-18 04:44:20.918755 | orchestrator | 2026-03-18 04:44:20.918766 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-03-18 04:44:20.918777 | orchestrator | Wednesday 18 March 2026 04:43:30 +0000 (0:00:00.896) 0:00:02.390 ******* 2026-03-18 04:44:20.918787 | orchestrator | ok: [testbed-node-0] => { 2026-03-18 04:44:20.918799 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-18 04:44:20.918811 | orchestrator | } 2026-03-18 04:44:20.918822 | orchestrator | ok: [testbed-node-1] => { 2026-03-18 04:44:20.918833 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-18 04:44:20.918843 | orchestrator | } 2026-03-18 04:44:20.918854 | orchestrator | ok: [testbed-node-2] => { 2026-03-18 04:44:20.918865 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-18 04:44:20.918875 | orchestrator | } 2026-03-18 04:44:20.918886 | orchestrator | ok: [testbed-node-3] => { 2026-03-18 04:44:20.918897 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-18 04:44:20.918908 | orchestrator | } 2026-03-18 04:44:20.918919 | orchestrator | ok: [testbed-node-4] => { 2026-03-18 04:44:20.918929 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-18 04:44:20.918941 | orchestrator | } 2026-03-18 04:44:20.918953 | orchestrator | ok: [testbed-node-5] => { 2026-03-18 04:44:20.918966 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-18 04:44:20.918978 | orchestrator | } 2026-03-18 04:44:20.918990 | orchestrator | ok: [testbed-manager] => { 2026-03-18 04:44:20.919002 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-18 04:44:20.919014 | orchestrator | } 2026-03-18 04:44:20.919026 | orchestrator | 2026-03-18 
04:44:20.919039 | orchestrator | TASK [Gather facts] ************************************************************ 2026-03-18 04:44:20.919051 | orchestrator | Wednesday 18 March 2026 04:43:32 +0000 (0:00:02.034) 0:00:04.424 ******* 2026-03-18 04:44:20.919088 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:20.919101 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:44:20.919114 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:44:20.919126 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:20.919174 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:20.919188 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:20.919201 | orchestrator | ok: [testbed-manager] 2026-03-18 04:44:20.919213 | orchestrator | 2026-03-18 04:44:20.919225 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-03-18 04:44:20.919238 | orchestrator | Wednesday 18 March 2026 04:43:37 +0000 (0:00:04.575) 0:00:09.000 ******* 2026-03-18 04:44:20.919250 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 04:44:20.919263 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:44:20.919275 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 04:44:20.919287 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-18 04:44:20.919315 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 04:44:20.919326 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:44:20.919337 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:44:20.919348 | orchestrator | 2026-03-18 04:44:20.919359 | orchestrator | TASK [Set_fact rolling_update] 
************************************************* 2026-03-18 04:44:20.919369 | orchestrator | Wednesday 18 March 2026 04:44:07 +0000 (0:00:30.592) 0:00:39.592 ******* 2026-03-18 04:44:20.919380 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:44:20.919391 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:44:20.919402 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:44:20.919413 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:44:20.919423 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:44:20.919434 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:44:20.919444 | orchestrator | ok: [testbed-manager] 2026-03-18 04:44:20.919455 | orchestrator | 2026-03-18 04:44:20.919466 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 04:44:20.919477 | orchestrator | Wednesday 18 March 2026 04:44:08 +0000 (0:00:00.957) 0:00:40.550 ******* 2026-03-18 04:44:20.919506 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-03-18 04:44:20.919519 | orchestrator | 2026-03-18 04:44:20.919530 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-18 04:44:20.919540 | orchestrator | Wednesday 18 March 2026 04:44:10 +0000 (0:00:01.875) 0:00:42.425 ******* 2026-03-18 04:44:20.919551 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:44:20.919562 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:44:20.919572 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:44:20.919583 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:44:20.919593 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:44:20.919604 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:44:20.919615 | orchestrator | ok: [testbed-manager] 2026-03-18 04:44:20.919625 | orchestrator | 2026-03-18 04:44:20.919636 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] 
***************************************** 2026-03-18 04:44:20.919647 | orchestrator | Wednesday 18 March 2026 04:44:12 +0000 (0:00:01.365) 0:00:43.791 ******* 2026-03-18 04:44:20.919657 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:44:20.919668 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:44:20.919678 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:44:20.919689 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:44:20.919699 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:44:20.919710 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:44:20.919721 | orchestrator | ok: [testbed-manager] 2026-03-18 04:44:20.919731 | orchestrator | 2026-03-18 04:44:20.919742 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 04:44:20.919762 | orchestrator | Wednesday 18 March 2026 04:44:12 +0000 (0:00:00.779) 0:00:44.570 ******* 2026-03-18 04:44:20.919773 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:44:20.919784 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:44:20.919794 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:44:20.919805 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:44:20.919815 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:44:20.919826 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:44:20.919837 | orchestrator | ok: [testbed-manager] 2026-03-18 04:44:20.919847 | orchestrator | 2026-03-18 04:44:20.919858 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 04:44:20.919869 | orchestrator | Wednesday 18 March 2026 04:44:14 +0000 (0:00:01.412) 0:00:45.983 ******* 2026-03-18 04:44:20.919879 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:44:20.919890 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:44:20.919900 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:44:20.919911 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:44:20.919921 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:44:20.919932 | 
orchestrator | ok: [testbed-node-5] 2026-03-18 04:44:20.919942 | orchestrator | ok: [testbed-manager] 2026-03-18 04:44:20.919953 | orchestrator | 2026-03-18 04:44:20.919964 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-18 04:44:20.919974 | orchestrator | Wednesday 18 March 2026 04:44:15 +0000 (0:00:00.806) 0:00:46.790 ******* 2026-03-18 04:44:20.919985 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:44:20.919995 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:44:20.920006 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:44:20.920016 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:44:20.920027 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:44:20.920038 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:44:20.920048 | orchestrator | ok: [testbed-manager] 2026-03-18 04:44:20.920059 | orchestrator | 2026-03-18 04:44:20.920070 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-18 04:44:20.920080 | orchestrator | Wednesday 18 March 2026 04:44:16 +0000 (0:00:01.080) 0:00:47.870 ******* 2026-03-18 04:44:20.920091 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:44:20.920102 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:44:20.920113 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:44:20.920123 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:44:20.920157 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:44:20.920168 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:44:20.920179 | orchestrator | ok: [testbed-manager] 2026-03-18 04:44:20.920189 | orchestrator | 2026-03-18 04:44:20.920200 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-18 04:44:20.920211 | orchestrator | Wednesday 18 March 2026 04:44:16 +0000 (0:00:00.742) 0:00:48.613 ******* 2026-03-18 04:44:20.920222 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:20.920232 | orchestrator | 
skipping: [testbed-node-1] 2026-03-18 04:44:20.920243 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:44:20.920254 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:20.920264 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:20.920275 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:20.920285 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:44:20.920296 | orchestrator | 2026-03-18 04:44:20.920307 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-18 04:44:20.920318 | orchestrator | Wednesday 18 March 2026 04:44:17 +0000 (0:00:00.976) 0:00:49.589 ******* 2026-03-18 04:44:20.920328 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:44:20.920339 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:44:20.920350 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:44:20.920360 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:44:20.920371 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:44:20.920381 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:44:20.920392 | orchestrator | ok: [testbed-manager] 2026-03-18 04:44:20.920403 | orchestrator | 2026-03-18 04:44:20.920414 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-18 04:44:20.920431 | orchestrator | Wednesday 18 March 2026 04:44:18 +0000 (0:00:00.759) 0:00:50.348 ******* 2026-03-18 04:44:20.920442 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:44:20.920454 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:44:20.920464 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:44:20.920475 | orchestrator | 2026-03-18 04:44:20.920486 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-18 04:44:20.920496 | orchestrator | Wednesday 18 March 2026 04:44:19 +0000 
(0:00:01.180) 0:00:51.529 ******* 2026-03-18 04:44:20.920507 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:44:20.920518 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:44:20.920529 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:44:20.920539 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:44:20.920550 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:44:20.920560 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:44:20.920571 | orchestrator | ok: [testbed-manager] 2026-03-18 04:44:20.920582 | orchestrator | 2026-03-18 04:44:20.920593 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-18 04:44:20.920611 | orchestrator | Wednesday 18 March 2026 04:44:20 +0000 (0:00:00.993) 0:00:52.523 ******* 2026-03-18 04:44:32.452232 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:44:32.452349 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:44:32.452365 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:44:32.452377 | orchestrator | 2026-03-18 04:44:32.452389 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-18 04:44:32.452401 | orchestrator | Wednesday 18 March 2026 04:44:23 +0000 (0:00:02.285) 0:00:54.808 ******* 2026-03-18 04:44:32.452413 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-18 04:44:32.452425 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-18 04:44:32.452436 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-18 04:44:32.452491 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:32.452505 | orchestrator | 2026-03-18 04:44:32.452517 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-18 04:44:32.452528 | orchestrator | Wednesday 18 March 2026 
04:44:23 +0000 (0:00:00.413) 0:00:55.221 ******* 2026-03-18 04:44:32.452541 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-18 04:44:32.452555 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-18 04:44:32.452567 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-18 04:44:32.452578 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:32.452589 | orchestrator | 2026-03-18 04:44:32.452600 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-18 04:44:32.452612 | orchestrator | Wednesday 18 March 2026 04:44:24 +0000 (0:00:00.932) 0:00:56.154 ******* 2026-03-18 04:44:32.452624 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:32.452658 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:32.452670 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:32.452681 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:32.452692 | orchestrator | 2026-03-18 04:44:32.452703 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-18 04:44:32.452719 | orchestrator | Wednesday 18 March 2026 04:44:24 +0000 (0:00:00.166) 0:00:56.321 ******* 2026-03-18 04:44:32.452740 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'dfaa0207b10e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 04:44:21.614235', 'end': '2026-03-18 04:44:21.666338', 'delta': '0:00:00.052103', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['dfaa0207b10e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-18 04:44:32.452787 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '1edfdf2d0145', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 
04:44:22.210109', 'end': '2026-03-18 04:44:22.248474', 'delta': '0:00:00.038365', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1edfdf2d0145'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-18 04:44:32.452809 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'fc8e238828f1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 04:44:22.987172', 'end': '2026-03-18 04:44:23.058789', 'delta': '0:00:00.071617', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc8e238828f1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-18 04:44:32.452830 | orchestrator | 2026-03-18 04:44:32.452841 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-18 04:44:32.452852 | orchestrator | Wednesday 18 March 2026 04:44:25 +0000 (0:00:00.449) 0:00:56.771 ******* 2026-03-18 04:44:32.452863 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:44:32.452874 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:44:32.452885 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:44:32.452905 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:44:32.452916 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:44:32.452926 | orchestrator | 
ok: [testbed-node-5] 2026-03-18 04:44:32.452937 | orchestrator | ok: [testbed-manager] 2026-03-18 04:44:32.452948 | orchestrator | 2026-03-18 04:44:32.452959 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-18 04:44:32.452970 | orchestrator | Wednesday 18 March 2026 04:44:26 +0000 (0:00:01.009) 0:00:57.780 ******* 2026-03-18 04:44:32.452981 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:32.452992 | orchestrator | 2026-03-18 04:44:32.453003 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-18 04:44:32.453014 | orchestrator | Wednesday 18 March 2026 04:44:26 +0000 (0:00:00.281) 0:00:58.061 ******* 2026-03-18 04:44:32.453025 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:44:32.453036 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:44:32.453046 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:44:32.453057 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:44:32.453068 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:44:32.453078 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:44:32.453089 | orchestrator | ok: [testbed-manager] 2026-03-18 04:44:32.453100 | orchestrator | 2026-03-18 04:44:32.453110 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-18 04:44:32.453121 | orchestrator | Wednesday 18 March 2026 04:44:27 +0000 (0:00:00.997) 0:00:59.059 ******* 2026-03-18 04:44:32.453132 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:44:32.453168 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-18 04:44:32.453180 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-18 04:44:32.453191 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-18 04:44:32.453202 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-18 04:44:32.453212 | orchestrator | ok: [testbed-node-5 -> 
testbed-node-0(192.168.16.10)] 2026-03-18 04:44:32.453223 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-18 04:44:32.453234 | orchestrator | 2026-03-18 04:44:32.453245 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 04:44:32.453256 | orchestrator | Wednesday 18 March 2026 04:44:29 +0000 (0:00:02.229) 0:01:01.288 ******* 2026-03-18 04:44:32.453273 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:44:32.453284 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:44:32.453295 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:44:32.453305 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:44:32.453316 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:44:32.453326 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:44:32.453337 | orchestrator | ok: [testbed-manager] 2026-03-18 04:44:32.453348 | orchestrator | 2026-03-18 04:44:32.453359 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-18 04:44:32.453370 | orchestrator | Wednesday 18 March 2026 04:44:30 +0000 (0:00:01.057) 0:01:02.346 ******* 2026-03-18 04:44:32.453380 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:32.453391 | orchestrator | 2026-03-18 04:44:32.453402 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-18 04:44:32.453413 | orchestrator | Wednesday 18 March 2026 04:44:30 +0000 (0:00:00.145) 0:01:02.491 ******* 2026-03-18 04:44:32.453423 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:32.453434 | orchestrator | 2026-03-18 04:44:32.453445 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 04:44:32.453456 | orchestrator | Wednesday 18 March 2026 04:44:31 +0000 (0:00:00.239) 0:01:02.731 ******* 2026-03-18 04:44:32.453466 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:32.453477 | orchestrator | skipping: 
[testbed-node-1] 2026-03-18 04:44:32.453488 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:44:32.453499 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:32.453509 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:32.453527 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:38.275828 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:44:38.275967 | orchestrator | 2026-03-18 04:44:38.275998 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-18 04:44:38.276021 | orchestrator | Wednesday 18 March 2026 04:44:32 +0000 (0:00:01.321) 0:01:04.052 ******* 2026-03-18 04:44:38.276034 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:38.276045 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:44:38.276056 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:44:38.276072 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:38.276090 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:38.276109 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:38.276126 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:44:38.276145 | orchestrator | 2026-03-18 04:44:38.276259 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-18 04:44:38.276280 | orchestrator | Wednesday 18 March 2026 04:44:33 +0000 (0:00:00.812) 0:01:04.864 ******* 2026-03-18 04:44:38.276298 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:38.276318 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:44:38.276337 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:44:38.276355 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:38.276375 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:38.276394 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:38.276413 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:44:38.276426 | orchestrator | 2026-03-18 
04:44:38.276439 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-18 04:44:38.276451 | orchestrator | Wednesday 18 March 2026 04:44:34 +0000 (0:00:01.060) 0:01:05.925 ******* 2026-03-18 04:44:38.276464 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:38.276484 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:44:38.276505 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:44:38.276518 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:38.276537 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:38.276556 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:38.276576 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:44:38.276596 | orchestrator | 2026-03-18 04:44:38.276616 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-18 04:44:38.276638 | orchestrator | Wednesday 18 March 2026 04:44:35 +0000 (0:00:00.801) 0:01:06.726 ******* 2026-03-18 04:44:38.276659 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:38.276678 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:44:38.276696 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:44:38.276716 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:38.276731 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:38.276743 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:38.276760 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:44:38.276779 | orchestrator | 2026-03-18 04:44:38.276819 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-18 04:44:38.276857 | orchestrator | Wednesday 18 March 2026 04:44:36 +0000 (0:00:01.055) 0:01:07.781 ******* 2026-03-18 04:44:38.276878 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:38.276891 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:44:38.276902 | orchestrator | skipping: 
[testbed-node-2] 2026-03-18 04:44:38.276913 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:38.276928 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:38.276946 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:38.276967 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:44:38.276985 | orchestrator | 2026-03-18 04:44:38.277005 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-18 04:44:38.277021 | orchestrator | Wednesday 18 March 2026 04:44:36 +0000 (0:00:00.758) 0:01:08.540 ******* 2026-03-18 04:44:38.277039 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:38.277058 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:44:38.277078 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:44:38.277134 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:38.277184 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:38.277204 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:38.277224 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:44:38.277236 | orchestrator | 2026-03-18 04:44:38.277247 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-18 04:44:38.277258 | orchestrator | Wednesday 18 March 2026 04:44:37 +0000 (0:00:00.996) 0:01:09.536 ******* 2026-03-18 04:44:38.277295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.277320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': 
[], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.277341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.277410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 04:44:38.277437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.277450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.277461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.277483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd04444e1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-18 04:44:38.277520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.408827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.408941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 
'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.408958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.408970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.409005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 04:44:38.409020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.409044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.409056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.409095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a74f897f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part16', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 
'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part14', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part15', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part1', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-18 04:44:38.409129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.409220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.409245 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:38.409277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.409297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.409309 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.409331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 04:44:38.583553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.583649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.583663 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.583698 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:44:38.583729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bbfcb729', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-18 04:44:38.583744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.583772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.583784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-03-18 04:44:38.583797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a', 'dm-uuid-LVM-OBDgCO1TfJO26KZndmcG4XUfdlxxEe11eqb03b1R3TiAd5BAik4vvOnTIot4pXZ1'], 'uuids': ['55d52066-97cb-48c1-a9a5-651ff762c061'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ebabc839', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1']}})  2026-03-18 04:44:38.583817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa', 'scsi-SQEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '26f175df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-18 04:44:38.583835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hyLInL-qBmT-hkMu-ewvD-iGD6-c0uQ-hDScLy', 'scsi-0QEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768', 'scsi-SQEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3c07f10e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb']}})  2026-03-18 04:44:38.583848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.583860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.583906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-39-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 04:44:38.727743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.727850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3', 'dm-uuid-CRYPT-LUKS2-7a3d4fd16bbc483aab118d6b9a67b0a4-TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-18 04:44:38.727893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.727908 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb', 'dm-uuid-LVM-W5EL6s0cOZukCJgJFLnUeUfZF3v581ieTXRD31C4XH2D2TZlGP7o3YPUberRNbx3'], 'uuids': ['7a3d4fd1-6bbc-483a-ab11-8d6b9a67b0a4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3c07f10e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3']}})  2026-03-18 04:44:38.727922 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:44:38.727950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-w7AXxM-UwrZ-P6aH-00LI-mMT0-kFYy-HZNbAJ', 'scsi-0QEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e', 'scsi-SQEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ebabc839', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a']}})  2026-03-18 04:44:38.727963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.727997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1c5784ed', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-18 04:44:38.728027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.728039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.728056 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1', 'dm-uuid-CRYPT-LUKS2-55d5206697cb48c1a9a5651ff762c061-eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-18 04:44:38.728068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:38.728079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d', 'dm-uuid-LVM-1nghto8FjlgOMGE0qJuNE35bcFGeakm7FeqYn9N8yM2I7mHfmTh3UyYEE55mFAWL'], 'uuids': ['983d6df2-25ad-44ac-a3c4-ba9acd83e203'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4bc8da1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL']}})  2026-03-18 04:44:38.728099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a', 'scsi-SQEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9cbe8edb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-18 04:44:39.070640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jnV2yd-YS7R-Vqep-tcrP-VJxp-okiM-Yb1ELG', 'scsi-0QEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc', 'scsi-SQEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '80734d97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af']}})  2026-03-18 04:44:39.070773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.070802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.070840 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 04:44:39.070862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.070879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M', 'dm-uuid-CRYPT-LUKS2-61b3b30ad50c493e85c9b4a1f26e6c13-31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-18 04:44:39.070898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.070968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af', 'dm-uuid-LVM-r2QSpox5L5YvZxbLW2ofZmnL2yRyHAcb31gjpKAQuj1V0dzEH4DggGep9onP7U5M'], 'uuids': ['61b3b30a-d50c-493e-85c9-b4a1f26e6c13'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '80734d97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M']}})  2026-03-18 04:44:39.070991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-gdKyfy-wnzk-0StP-QaSt-irpk-iROA-l0CD4I', 'scsi-0QEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a', 'scsi-SQEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4bc8da1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d']}})  2026-03-18 04:44:39.071009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.071038 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '248efa21', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-18 04:44:39.071072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.071102 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:39.203112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.203297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL', 
'dm-uuid-CRYPT-LUKS2-983d6df225ad44aca3c4ba9acd83e203-FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-18 04:44:39.203317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.203347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f', 'dm-uuid-LVM-IyJ409WPQ2Ewwg643e4T8GcTWsVLXvc4PfxdfcUZHCmpn1f575ZO5FoE28c03VdS'], 'uuids': ['0c1ae19d-2c32-4e94-8f09-c34bb952e967'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '54344bae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS']}})  2026-03-18 04:44:39.203361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216', 'scsi-SQEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': 
None, 'sas_device_handle': None, 'serial': '343cfa22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-18 04:44:39.203401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-wEEZ4B-D8dq-p1QG-iT9B-teZl-6bRA-4Rtw7V', 'scsi-0QEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00', 'scsi-SQEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '92bad715', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea']}})  2026-03-18 04:44:39.203436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.203468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.203480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 04:44:39.203492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.203509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw', 'dm-uuid-CRYPT-LUKS2-b658b175f7d84bc1a9acacbdfc2fb3a4-T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-18 04:44:39.203521 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.203532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea', 'dm-uuid-LVM-datDZvt3H0VWDhIXtfyG2nxxdM9DebWAT9QYVvDcd9eNFRbEejIJhI9dObKuqGRw'], 'uuids': ['b658b175-f7d8-4bc1-a9ac-acbdfc2fb3a4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '92bad715', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw']}})  2026-03-18 04:44:39.203556 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-yCkM9t-1XKI-b30Y-UmhR-lcOf-KBlN-LK1ss0', 'scsi-0QEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568', 'scsi-SQEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '54344bae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f']}})  2026-03-18 04:44:39.203596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.213806 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:39.213908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '15119f5e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16'], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-18 04:44:39.213927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.213962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.213975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS', 'dm-uuid-CRYPT-LUKS2-0c1ae19d2c324e948f09c34bb952e967-Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-18 04:44:39.213987 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:39.213998 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.214074 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.214088 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.214099 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-19-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 04:44:39.214116 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': 
'0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.214128 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.214181 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.214204 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e', 'scsi-SQEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '93c1740b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e-part16', 'scsi-SQEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e-part14', 
'scsi-SQEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e-part15', 'scsi-SQEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e-part1', 'scsi-SQEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-18 04:44:39.720994 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.721097 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:44:39.721113 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:44:39.721127 | orchestrator | 2026-03-18 04:44:39.721140 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-18 04:44:39.721214 | orchestrator | Wednesday 18 March 2026 04:44:39 +0000 (0:00:01.396) 0:01:10.933 ******* 2026-03-18 04:44:39.721230 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.721265 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.721277 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.721290 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.721321 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.721333 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.721350 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.721374 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd04444e1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.721396 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.888930 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.889026 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:39.889059 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.889091 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.889103 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.889116 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.889128 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.889183 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.889202 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.889226 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a74f897f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part16', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part14', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part15', 
'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part1', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.889240 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:39.889260 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.179575 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:44:40.179716 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.179736 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.179749 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.179761 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.179774 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.179785 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.179839 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.179857 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bbfcb729', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.179871 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.179882 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.179901 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:44:40.179925 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.332557 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a', 'dm-uuid-LVM-OBDgCO1TfJO26KZndmcG4XUfdlxxEe11eqb03b1R3TiAd5BAik4vvOnTIot4pXZ1'], 'uuids': ['55d52066-97cb-48c1-a9a5-651ff762c061'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ebabc839', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1']}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.332662 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa', 'scsi-SQEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '26f175df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.332679 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hyLInL-qBmT-hkMu-ewvD-iGD6-c0uQ-hDScLy', 'scsi-0QEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768', 'scsi-SQEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3c07f10e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb']}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.332694 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.332751 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.332781 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.332794 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.332808 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3', 'dm-uuid-CRYPT-LUKS2-7a3d4fd16bbc483aab118d6b9a67b0a4-TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.332827 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.332847 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb', 'dm-uuid-LVM-W5EL6s0cOZukCJgJFLnUeUfZF3v581ieTXRD31C4XH2D2TZlGP7o3YPUberRNbx3'], 'uuids': ['7a3d4fd1-6bbc-483a-ab11-8d6b9a67b0a4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3c07f10e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3']}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.332895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-w7AXxM-UwrZ-P6aH-00LI-mMT0-kFYy-HZNbAJ', 'scsi-0QEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e', 'scsi-SQEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ebabc839', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a']}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.443851 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.443925 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.443948 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 
'sas_device_handle': None, 'serial': '1c5784ed', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.443982 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d', 'dm-uuid-LVM-1nghto8FjlgOMGE0qJuNE35bcFGeakm7FeqYn9N8yM2I7mHfmTh3UyYEE55mFAWL'], 'uuids': ['983d6df2-25ad-44ac-a3c4-ba9acd83e203'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4bc8da1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL']}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.443989 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.443993 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.443998 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a', 'scsi-SQEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9cbe8edb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.444009 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1', 'dm-uuid-CRYPT-LUKS2-55d5206697cb48c1a9a5651ff762c061-eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.444021 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jnV2yd-YS7R-Vqep-tcrP-VJxp-okiM-Yb1ELG', 'scsi-0QEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc', 'scsi-SQEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '80734d97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af']}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.505295 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.505388 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.505405 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.505442 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.505469 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M', 'dm-uuid-CRYPT-LUKS2-61b3b30ad50c493e85c9b4a1f26e6c13-31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.505481 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.505512 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.505525 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af', 'dm-uuid-LVM-r2QSpox5L5YvZxbLW2ofZmnL2yRyHAcb31gjpKAQuj1V0dzEH4DggGep9onP7U5M'], 'uuids': ['61b3b30a-d50c-493e-85c9-b4a1f26e6c13'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '80734d97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M']}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.505540 | orchestrator | skipping: [testbed-node-5] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f', 'dm-uuid-LVM-IyJ409WPQ2Ewwg643e4T8GcTWsVLXvc4PfxdfcUZHCmpn1f575ZO5FoE28c03VdS'], 'uuids': ['0c1ae19d-2c32-4e94-8f09-c34bb952e967'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '54344bae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS']}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.505560 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-gdKyfy-wnzk-0StP-QaSt-irpk-iROA-l0CD4I', 'scsi-0QEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a', 'scsi-SQEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4bc8da1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d']}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.505586 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216', 'scsi-SQEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '343cfa22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.599297 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.599395 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': 
['lvm-pv-uuid-wEEZ4B-D8dq-p1QG-iT9B-teZl-6bRA-4Rtw7V', 'scsi-0QEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00', 'scsi-SQEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '92bad715', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea']}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.599432 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '248efa21', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 
'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.599486 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.599500 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.599512 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-03-18 04:44:40.599536 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.599556 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.599582 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.599612 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL', 'dm-uuid-CRYPT-LUKS2-983d6df225ad44aca3c4ba9acd83e203-FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.764382 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw', 'dm-uuid-CRYPT-LUKS2-b658b175f7d84bc1a9acacbdfc2fb3a4-T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.764471 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 
'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.764516 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea', 'dm-uuid-LVM-datDZvt3H0VWDhIXtfyG2nxxdM9DebWAT9QYVvDcd9eNFRbEejIJhI9dObKuqGRw'], 'uuids': ['b658b175-f7d8-4bc1-a9ac-acbdfc2fb3a4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '92bad715', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw']}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.764543 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-yCkM9t-1XKI-b30Y-UmhR-lcOf-KBlN-LK1ss0', 'scsi-0QEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568', 'scsi-SQEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '54344bae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 
'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f']}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.764557 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.764600 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '15119f5e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.764651 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.764674 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.764713 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS', 'dm-uuid-CRYPT-LUKS2-0c1ae19d2c324e948f09c34bb952e967-Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.764734 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:40.867886 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:40.867981 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:40.867998 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.868015 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.868050 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': 
[], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.868063 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-19-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.868089 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.868101 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.868129 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.868145 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e', 'scsi-SQEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '93c1740b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e-part16', 'scsi-SQEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': 
['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e-part14', 'scsi-SQEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e-part15', 'scsi-SQEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e-part1', 'scsi-SQEMU_QEMU_HARDDISK_93c1740b-d129-4fb1-8a5c-0a256369ea5e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.868217 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.868230 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:44:40.868241 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:44:40.868252 | orchestrator | 2026-03-18 04:44:40.868285 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-18 04:44:55.395613 | orchestrator | Wednesday 18 March 2026 04:44:40 +0000 (0:00:01.546) 0:01:12.479 ******* 2026-03-18 04:44:55.395731 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:44:55.395748 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:44:55.395760 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:44:55.395771 | orchestrator | ok: [testbed-node-3] 2026-03-18 
04:44:55.395809 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:44:55.395821 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:44:55.395832 | orchestrator | ok: [testbed-manager] 2026-03-18 04:44:55.395843 | orchestrator | 2026-03-18 04:44:55.395854 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-18 04:44:55.395865 | orchestrator | Wednesday 18 March 2026 04:44:42 +0000 (0:00:01.394) 0:01:13.873 ******* 2026-03-18 04:44:55.395876 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:44:55.395886 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:44:55.395897 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:44:55.395907 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:44:55.395918 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:44:55.395928 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:44:55.395939 | orchestrator | ok: [testbed-manager] 2026-03-18 04:44:55.395950 | orchestrator | 2026-03-18 04:44:55.395960 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 04:44:55.395972 | orchestrator | Wednesday 18 March 2026 04:44:43 +0000 (0:00:00.859) 0:01:14.732 ******* 2026-03-18 04:44:55.395983 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:44:55.395994 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:44:55.396004 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:44:55.396015 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:44:55.396025 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:44:55.396036 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:44:55.396047 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:44:55.396058 | orchestrator | 2026-03-18 04:44:55.396069 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 04:44:55.396080 | orchestrator | Wednesday 18 March 2026 04:44:44 +0000 (0:00:01.321) 0:01:16.053 ******* 2026-03-18 04:44:55.396090 | 
orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:55.396101 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:44:55.396112 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:44:55.396122 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:55.396133 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:55.396144 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:55.396154 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:44:55.396189 | orchestrator | 2026-03-18 04:44:55.396201 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 04:44:55.396212 | orchestrator | Wednesday 18 March 2026 04:44:45 +0000 (0:00:00.795) 0:01:16.849 ******* 2026-03-18 04:44:55.396223 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:55.396233 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:44:55.396244 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:44:55.396254 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:55.396265 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:55.396276 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:55.396286 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)] 2026-03-18 04:44:55.396297 | orchestrator | 2026-03-18 04:44:55.396308 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 04:44:55.396319 | orchestrator | Wednesday 18 March 2026 04:44:46 +0000 (0:00:01.628) 0:01:18.477 ******* 2026-03-18 04:44:55.396336 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:55.396355 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:44:55.396374 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:44:55.396393 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:55.396412 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:55.396432 | orchestrator | skipping: [testbed-node-5] 
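The delegated read above (`ok: [testbed-manager -> testbed-node-2(192.168.16.12)]`) pulls the pool default from a running monitor rather than from the manager itself. A minimal sketch of what such a ceph-facts task pair can look like follows; the command, variable names, and conditions are illustrative assumptions, not the exact ceph-ansible source:

```yaml
# Hedged sketch: read the osd pool default crush rule from the first
# monitor and publish it as a fact. Names here are illustrative.
- name: Read osd pool default crush rule
  command: "ceph --cluster {{ cluster | default('ceph') }} config get mon osd_pool_default_crush_rule"
  register: crush_rule_out
  delegate_to: "{{ groups[mon_group_name][0] }}"
  run_once: true
  changed_when: false

- name: Set osd_pool_default_crush_rule fact
  set_fact:
    osd_pool_default_crush_rule: "{{ crush_rule_out.stdout | trim }}"
  when: crush_rule_out.stdout | default('') | length > 0
```

This matches the pattern in the log: the read runs once (delegated), while the follow-up `Set osd_pool_default_crush_rule fact` task is skipped on hosts where its condition does not hold.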
2026-03-18 04:44:55.396451 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:44:55.396472 | orchestrator | 2026-03-18 04:44:55.396492 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-18 04:44:55.396512 | orchestrator | Wednesday 18 March 2026 04:44:47 +0000 (0:00:00.767) 0:01:19.245 ******* 2026-03-18 04:44:55.396534 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:44:55.396566 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-18 04:44:55.396585 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-18 04:44:55.396603 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-18 04:44:55.396622 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-18 04:44:55.396633 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-18 04:44:55.396644 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-18 04:44:55.396654 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-18 04:44:55.396665 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-18 04:44:55.396675 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-18 04:44:55.396686 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-18 04:44:55.396697 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-18 04:44:55.396708 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-18 04:44:55.396718 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-18 04:44:55.396729 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-18 04:44:55.396739 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-18 04:44:55.396750 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-18 04:44:55.396760 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-18 
04:44:55.396770 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-18 04:44:55.396781 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-18 04:44:55.396791 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-18 04:44:55.396802 | orchestrator | 2026-03-18 04:44:55.396812 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-18 04:44:55.396823 | orchestrator | Wednesday 18 March 2026 04:44:49 +0000 (0:00:01.868) 0:01:21.114 ******* 2026-03-18 04:44:55.396834 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-18 04:44:55.396845 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-18 04:44:55.396875 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-18 04:44:55.396886 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:55.396897 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-18 04:44:55.396907 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-18 04:44:55.396918 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-18 04:44:55.396928 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:44:55.396939 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-18 04:44:55.396949 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-18 04:44:55.396959 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-18 04:44:55.396970 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:44:55.396980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-18 04:44:55.396991 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-18 04:44:55.397001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-18 04:44:55.397012 | orchestrator | skipping: [testbed-node-3] 
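The per-item `ok` lines above show every host looping over the three monitor nodes (testbed-node-0/1/2) to build its monitor address list; the ipv6 variant of the same task is then skipped everywhere because the deployment is ipv4. A hedged sketch of that loop, with illustrative variable names:

```yaml
# Hedged sketch: each host accumulates one entry per monitor, which is
# why the log shows one "ok: [host] => (item=mon)" line per mon node.
- name: Set_fact _monitor_addresses - ipv4
  set_fact:
    _monitor_addresses: >-
      {{ _monitor_addresses | default([]) +
         [{'name': item,
           'addr': hostvars[item]['ansible_default_ipv4']['address']}] }}
  loop: "{{ groups[mon_group_name] }}"
  when: ip_version == 'ipv4'
```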
2026-03-18 04:44:55.397022 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-18 04:44:55.397032 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-18 04:44:55.397043 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-18 04:44:55.397054 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:55.397064 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-18 04:44:55.397075 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-18 04:44:55.397085 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-18 04:44:55.397096 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:55.397106 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-18 04:44:55.397125 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-18 04:44:55.397135 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-18 04:44:55.397146 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:44:55.397156 | orchestrator | 2026-03-18 04:44:55.397214 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-18 04:44:55.397226 | orchestrator | Wednesday 18 March 2026 04:44:50 +0000 (0:00:01.166) 0:01:22.280 ******* 2026-03-18 04:44:55.397276 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:44:55.397288 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:44:55.397298 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:44:55.397309 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:44:55.397321 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 04:44:55.397332 | orchestrator | 2026-03-18 04:44:55.397343 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface 
from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 04:44:55.397355 | orchestrator | Wednesday 18 March 2026 04:44:51 +0000 (0:00:01.091) 0:01:23.371 ******* 2026-03-18 04:44:55.397366 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:55.397377 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:55.397387 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:55.397398 | orchestrator | 2026-03-18 04:44:55.397409 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-18 04:44:55.397419 | orchestrator | Wednesday 18 March 2026 04:44:52 +0000 (0:00:00.609) 0:01:23.981 ******* 2026-03-18 04:44:55.397430 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:55.397441 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:55.397452 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:55.397462 | orchestrator | 2026-03-18 04:44:55.397473 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 04:44:55.397484 | orchestrator | Wednesday 18 March 2026 04:44:52 +0000 (0:00:00.364) 0:01:24.346 ******* 2026-03-18 04:44:55.397494 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:55.397505 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:44:55.397515 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:44:55.397526 | orchestrator | 2026-03-18 04:44:55.397536 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 04:44:55.397552 | orchestrator | Wednesday 18 March 2026 04:44:53 +0000 (0:00:00.353) 0:01:24.699 ******* 2026-03-18 04:44:55.397563 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:44:55.397574 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:44:55.397585 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:44:55.397595 | orchestrator | 2026-03-18 04:44:55.397606 | orchestrator | TASK [ceph-facts : Set_fact _interface] 
**************************************** 2026-03-18 04:44:55.397617 | orchestrator | Wednesday 18 March 2026 04:44:53 +0000 (0:00:00.433) 0:01:25.132 ******* 2026-03-18 04:44:55.397628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 04:44:55.397638 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 04:44:55.397649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 04:44:55.397659 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:55.397670 | orchestrator | 2026-03-18 04:44:55.397681 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 04:44:55.397691 | orchestrator | Wednesday 18 March 2026 04:44:53 +0000 (0:00:00.376) 0:01:25.509 ******* 2026-03-18 04:44:55.397702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 04:44:55.397712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 04:44:55.397723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 04:44:55.397734 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:44:55.397745 | orchestrator | 2026-03-18 04:44:55.397755 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 04:44:55.397774 | orchestrator | Wednesday 18 March 2026 04:44:54 +0000 (0:00:00.719) 0:01:26.228 ******* 2026-03-18 04:44:55.397785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 04:44:55.397802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 04:45:24.651702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 04:45:24.651819 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:24.651835 | orchestrator | 2026-03-18 04:45:24.651849 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 
04:45:24.651861 | orchestrator | Wednesday 18 March 2026 04:44:55 +0000 (0:00:00.770) 0:01:26.998 ******* 2026-03-18 04:45:24.651873 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:45:24.651884 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:45:24.651895 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:45:24.651906 | orchestrator | 2026-03-18 04:45:24.651917 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 04:45:24.651927 | orchestrator | Wednesday 18 March 2026 04:44:56 +0000 (0:00:00.640) 0:01:27.638 ******* 2026-03-18 04:45:24.651938 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-18 04:45:24.651949 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-18 04:45:24.651960 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-18 04:45:24.651971 | orchestrator | 2026-03-18 04:45:24.651982 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-18 04:45:24.651992 | orchestrator | Wednesday 18 March 2026 04:44:56 +0000 (0:00:00.555) 0:01:28.193 ******* 2026-03-18 04:45:24.652003 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:45:24.652014 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:45:24.652026 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:45:24.652036 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-18 04:45:24.652047 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 04:45:24.652058 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 04:45:24.652069 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 04:45:24.652081 | orchestrator | 2026-03-18 
04:45:24.652092 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-18 04:45:24.652102 | orchestrator | Wednesday 18 March 2026 04:44:57 +0000 (0:00:00.835) 0:01:29.029 ******* 2026-03-18 04:45:24.652113 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:45:24.652124 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:45:24.652135 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:45:24.652145 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-18 04:45:24.652156 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 04:45:24.652167 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 04:45:24.652177 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 04:45:24.652188 | orchestrator | 2026-03-18 04:45:24.652231 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] ************************** 2026-03-18 04:45:24.652243 | orchestrator | Wednesday 18 March 2026 04:44:59 +0000 (0:00:02.347) 0:01:31.377 ******* 2026-03-18 04:45:24.652256 | orchestrator | changed: [testbed-node-3] 2026-03-18 04:45:24.652269 | orchestrator | changed: [testbed-node-4] 2026-03-18 04:45:24.652281 | orchestrator | changed: [testbed-node-5] 2026-03-18 04:45:24.652294 | orchestrator | changed: [testbed-manager] 2026-03-18 04:45:24.652306 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:45:24.652343 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:45:24.652361 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:45:24.652379 | orchestrator | 2026-03-18 04:45:24.652398 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] 
*********************** 2026-03-18 04:45:24.652415 | orchestrator | Wednesday 18 March 2026 04:45:09 +0000 (0:00:10.143) 0:01:41.520 ******* 2026-03-18 04:45:24.652432 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:24.652450 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:24.652469 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:24.652507 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:24.652527 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:24.652541 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:24.652552 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:24.652562 | orchestrator | 2026-03-18 04:45:24.652573 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ******************************** 2026-03-18 04:45:24.652584 | orchestrator | Wednesday 18 March 2026 04:45:10 +0000 (0:00:01.057) 0:01:42.578 ******* 2026-03-18 04:45:24.652594 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:24.652605 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:24.652615 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:24.652626 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:24.652637 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:24.652647 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:24.652658 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:24.652668 | orchestrator | 2026-03-18 04:45:24.652679 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ******************************** 2026-03-18 04:45:24.652690 | orchestrator | Wednesday 18 March 2026 04:45:11 +0000 (0:00:00.750) 0:01:43.329 ******* 2026-03-18 04:45:24.652701 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:24.652711 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:45:24.652722 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:45:24.652732 | orchestrator | changed: [testbed-node-0] 
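The two `changed` ceph-infra steps logged above (apt cache refresh on all hosts, then a logrotate drop-in on the Ceph nodes) can be sketched as follows; the template name and destination path are illustrative assumptions:

```yaml
# Hedged sketch of the ceph-infra steps in the log: refresh the apt
# cache on Debian-family hosts, then install a logrotate snippet.
- name: Update cache for Debian based OSs
  apt:
    update_cache: true
  when: ansible_facts['os_family'] == 'Debian'

- name: Add logrotate configuration
  template:
    src: logrotate.conf.j2        # illustrative template name
    dest: /etc/logrotate.d/ceph   # illustrative destination
    mode: "0644"
```

Note the manager is skipped for the logrotate task in the log, consistent with a condition limiting it to hosts that actually run Ceph daemons.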
2026-03-18 04:45:24.652743 | orchestrator | changed: [testbed-node-3] 2026-03-18 04:45:24.652754 | orchestrator | changed: [testbed-node-4] 2026-03-18 04:45:24.652764 | orchestrator | changed: [testbed-node-5] 2026-03-18 04:45:24.652775 | orchestrator | 2026-03-18 04:45:24.652785 | orchestrator | TASK [ceph-validate : Include check_system.yml] ******************************** 2026-03-18 04:45:24.652797 | orchestrator | Wednesday 18 March 2026 04:45:13 +0000 (0:00:02.255) 0:01:45.584 ******* 2026-03-18 04:45:24.652825 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-03-18 04:45:24.652842 | orchestrator | 2026-03-18 04:45:24.652860 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] *************** 2026-03-18 04:45:24.652877 | orchestrator | Wednesday 18 March 2026 04:45:16 +0000 (0:00:02.226) 0:01:47.811 ******* 2026-03-18 04:45:24.652894 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:24.652910 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:24.652928 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:24.652947 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:24.652966 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:24.652983 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:24.653000 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:24.653011 | orchestrator | 2026-03-18 04:45:24.653022 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ****************************** 2026-03-18 04:45:24.653033 | orchestrator | Wednesday 18 March 2026 04:45:16 +0000 (0:00:00.748) 0:01:48.559 ******* 2026-03-18 04:45:24.653043 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:24.653054 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:24.653064 | orchestrator | skipping: 
[testbed-node-2] 2026-03-18 04:45:24.653075 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:24.653085 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:24.653095 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:24.653117 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:24.653127 | orchestrator | 2026-03-18 04:45:24.653138 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************ 2026-03-18 04:45:24.653149 | orchestrator | Wednesday 18 March 2026 04:45:17 +0000 (0:00:01.018) 0:01:49.578 ******* 2026-03-18 04:45:24.653159 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:24.653170 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:24.653180 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:24.653222 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:24.653235 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:24.653246 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:24.653256 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:24.653267 | orchestrator | 2026-03-18 04:45:24.653278 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************ 2026-03-18 04:45:24.653288 | orchestrator | Wednesday 18 March 2026 04:45:18 +0000 (0:00:00.814) 0:01:50.392 ******* 2026-03-18 04:45:24.653299 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:24.653310 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:24.653320 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:24.653331 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:24.653341 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:24.653352 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:24.653362 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:24.653372 | orchestrator | 2026-03-18 04:45:24.653383 | orchestrator | TASK [ceph-validate : Fail on unsupported 
CentOS release] ********************** 2026-03-18 04:45:24.653394 | orchestrator | Wednesday 18 March 2026 04:45:19 +0000 (0:00:01.060) 0:01:51.453 ******* 2026-03-18 04:45:24.653404 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:24.653414 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:24.653425 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:24.653435 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:24.653446 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:24.653456 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:24.653467 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:24.653478 | orchestrator | 2026-03-18 04:45:24.653488 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] *** 2026-03-18 04:45:24.653499 | orchestrator | Wednesday 18 March 2026 04:45:20 +0000 (0:00:00.848) 0:01:52.301 ******* 2026-03-18 04:45:24.653510 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:24.653520 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:24.653530 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:24.653541 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:24.653551 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:24.653562 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:24.653572 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:24.653583 | orchestrator | 2026-03-18 04:45:24.653594 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-03-18 04:45:24.653611 | orchestrator | Wednesday 18 March 2026 04:45:21 +0000 (0:00:01.091) 0:01:53.392 ******* 2026-03-18 04:45:24.653622 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:24.653632 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:24.653643 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:24.653653 | orchestrator | 
skipping: [testbed-node-3] 2026-03-18 04:45:24.653664 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:24.653674 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:24.653684 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:24.653695 | orchestrator | 2026-03-18 04:45:24.653705 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-03-18 04:45:24.653716 | orchestrator | Wednesday 18 March 2026 04:45:22 +0000 (0:00:00.774) 0:01:54.167 ******* 2026-03-18 04:45:24.653727 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:24.653737 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:24.653754 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:24.653765 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:24.653776 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:24.653786 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:24.653797 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:24.653807 | orchestrator | 2026-03-18 04:45:24.653818 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-03-18 04:45:24.653828 | orchestrator | Wednesday 18 March 2026 04:45:23 +0000 (0:00:01.085) 0:01:55.252 ******* 2026-03-18 04:45:24.653839 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:24.653850 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:24.653860 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:24.653870 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:24.653881 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:24.653891 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:24.653902 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:24.653913 | orchestrator | 2026-03-18 04:45:24.653931 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-03-18 
04:45:36.501408 | orchestrator | Wednesday 18 March 2026 04:45:24 +0000 (0:00:00.998) 0:01:56.251 ******* 2026-03-18 04:45:36.501512 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:36.501527 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:36.501538 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:36.501547 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:36.501557 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:36.501567 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:36.501577 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:36.501587 | orchestrator | 2026-03-18 04:45:36.501597 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-03-18 04:45:36.501608 | orchestrator | Wednesday 18 March 2026 04:45:25 +0000 (0:00:00.816) 0:01:57.067 ******* 2026-03-18 04:45:36.501618 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:36.501627 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:36.501637 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:36.501647 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:36.501656 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:36.501666 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:36.501676 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:36.501685 | orchestrator | 2026-03-18 04:45:36.501695 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-03-18 04:45:36.501705 | orchestrator | Wednesday 18 March 2026 04:45:26 +0000 (0:00:01.027) 0:01:58.094 ******* 2026-03-18 04:45:36.501715 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:36.501725 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:36.501735 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:36.501745 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:36.501754 | orchestrator 
| skipping: [testbed-node-4] 2026-03-18 04:45:36.501764 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:36.501773 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:36.501783 | orchestrator | 2026-03-18 04:45:36.501793 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-03-18 04:45:36.501803 | orchestrator | Wednesday 18 March 2026 04:45:27 +0000 (0:00:00.762) 0:01:58.857 ******* 2026-03-18 04:45:36.501812 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:36.501822 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:36.501832 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:36.501843 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})  2026-03-18 04:45:36.501854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})  2026-03-18 04:45:36.501864 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:36.501901 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})  2026-03-18 04:45:36.501911 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})  2026-03-18 04:45:36.501921 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:36.501931 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 04:45:36.501941 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  
2026-03-18 04:45:36.501952 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:36.501963 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:36.501974 | orchestrator | 2026-03-18 04:45:36.501985 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-03-18 04:45:36.501997 | orchestrator | Wednesday 18 March 2026 04:45:28 +0000 (0:00:01.134) 0:01:59.991 ******* 2026-03-18 04:45:36.502008 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:36.502077 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:36.502102 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:36.502114 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:36.502125 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:36.502136 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:36.502147 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:36.502157 | orchestrator | 2026-03-18 04:45:36.502170 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-03-18 04:45:36.502181 | orchestrator | Wednesday 18 March 2026 04:45:29 +0000 (0:00:00.753) 0:02:00.745 ******* 2026-03-18 04:45:36.502191 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:36.502220 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:36.502241 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:36.502252 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:36.502263 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:36.502274 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:36.502285 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:36.502296 | orchestrator | 2026-03-18 04:45:36.502306 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-03-18 04:45:36.502316 | orchestrator | Wednesday 18 March 2026 04:45:30 +0000 (0:00:01.091) 0:02:01.836 ******* 
2026-03-18 04:45:36.502326 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:36.502335 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:36.502345 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:36.502354 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:36.502364 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:36.502373 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:36.502382 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:36.502392 | orchestrator | 2026-03-18 04:45:36.502402 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-03-18 04:45:36.502412 | orchestrator | Wednesday 18 March 2026 04:45:31 +0000 (0:00:00.803) 0:02:02.640 ******* 2026-03-18 04:45:36.502437 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:36.502448 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:36.502457 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:36.502466 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:36.502476 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:36.502485 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:36.502495 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:36.502504 | orchestrator | 2026-03-18 04:45:36.502514 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-03-18 04:45:36.502523 | orchestrator | Wednesday 18 March 2026 04:45:32 +0000 (0:00:01.024) 0:02:03.665 ******* 2026-03-18 04:45:36.502542 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:36.502551 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:36.502561 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:36.502570 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:36.502579 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:36.502589 | orchestrator | skipping: [testbed-node-5] 
2026-03-18 04:45:36.502598 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:36.502608 | orchestrator | 2026-03-18 04:45:36.502617 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-03-18 04:45:36.502627 | orchestrator | Wednesday 18 March 2026 04:45:33 +0000 (0:00:00.995) 0:02:04.660 ******* 2026-03-18 04:45:36.502636 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:36.502646 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:36.502655 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:36.502665 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:36.502674 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:36.502684 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:36.502693 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:36.502702 | orchestrator | 2026-03-18 04:45:36.502712 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-03-18 04:45:36.502722 | orchestrator | Wednesday 18 March 2026 04:45:33 +0000 (0:00:00.788) 0:02:05.449 ******* 2026-03-18 04:45:36.502731 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:45:36.502741 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:36.502751 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:36.502760 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:45:36.502770 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 04:45:36.502780 | orchestrator | 2026-03-18 04:45:36.502790 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-03-18 04:45:36.502799 | orchestrator | Wednesday 18 March 2026 04:45:35 +0000 (0:00:01.623) 0:02:07.072 ******* 2026-03-18 04:45:36.502809 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:45:36.502820 | orchestrator | ok: 
[testbed-node-4] 2026-03-18 04:45:36.502829 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:45:36.502839 | orchestrator | 2026-03-18 04:45:36.502849 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-03-18 04:45:36.502858 | orchestrator | Wednesday 18 March 2026 04:45:35 +0000 (0:00:00.392) 0:02:07.465 ******* 2026-03-18 04:45:36.502868 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})  2026-03-18 04:45:36.502878 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})  2026-03-18 04:45:36.502888 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:36.502897 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})  2026-03-18 04:45:36.502907 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})  2026-03-18 04:45:36.502917 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:36.502926 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})  2026-03-18 04:45:36.502941 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})  2026-03-18 04:45:36.502951 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:36.502961 | orchestrator | 2026-03-18 04:45:36.502970 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-03-18 04:45:36.502986 | orchestrator | Wednesday 18 March 2026 
04:45:36 +0000 (0:00:00.393) 0:02:07.858 ******* 2026-03-18 04:45:36.502997 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'}, 'ansible_loop_var': 'item'})  2026-03-18 04:45:36.503009 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'}, 'ansible_loop_var': 'item'})  2026-03-18 04:45:36.503019 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:36.503035 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'}, 'ansible_loop_var': 'item'})  2026-03-18 04:45:39.767645 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'}, 'ansible_loop_var': 'item'})  2026-03-18 04:45:39.767749 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:39.767768 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'}, 
'ansible_loop_var': 'item'})  2026-03-18 04:45:39.767781 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'}, 'ansible_loop_var': 'item'})  2026-03-18 04:45:39.767792 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:39.767804 | orchestrator | 2026-03-18 04:45:39.767816 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-03-18 04:45:39.767828 | orchestrator | Wednesday 18 March 2026 04:45:36 +0000 (0:00:00.639) 0:02:08.498 ******* 2026-03-18 04:45:39.767839 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:39.767850 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:39.767860 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:39.767871 | orchestrator | 2026-03-18 04:45:39.767882 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-03-18 04:45:39.767893 | orchestrator | Wednesday 18 March 2026 04:45:37 +0000 (0:00:00.343) 0:02:08.841 ******* 2026-03-18 04:45:39.767903 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:39.767914 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:39.767925 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:39.767935 | orchestrator | 2026-03-18 04:45:39.767946 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-03-18 04:45:39.767956 | orchestrator | Wednesday 18 March 2026 04:45:37 +0000 (0:00:00.333) 0:02:09.175 ******* 2026-03-18 04:45:39.767967 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:39.767979 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:39.767990 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:39.768000 | 
orchestrator | 2026-03-18 04:45:39.768011 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-03-18 04:45:39.768022 | orchestrator | Wednesday 18 March 2026 04:45:37 +0000 (0:00:00.313) 0:02:09.488 ******* 2026-03-18 04:45:39.768057 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:39.768069 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:39.768080 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:45:39.768090 | orchestrator | 2026-03-18 04:45:39.768101 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-03-18 04:45:39.768112 | orchestrator | Wednesday 18 March 2026 04:45:38 +0000 (0:00:00.330) 0:02:09.819 ******* 2026-03-18 04:45:39.768123 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'}) 2026-03-18 04:45:39.768135 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'}) 2026-03-18 04:45:39.768178 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'}) 2026-03-18 04:45:39.768192 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'}) 2026-03-18 04:45:39.768283 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'}) 2026-03-18 04:45:39.768297 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'}) 2026-03-18 04:45:39.768310 | orchestrator | 2026-03-18 04:45:39.768323 | orchestrator | TASK 
[ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-03-18 04:45:39.768338 | orchestrator | Wednesday 18 March 2026 04:45:39 +0000 (0:00:01.319) 0:02:11.138 ******* 2026-03-18 04:45:39.768377 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb/osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 950, 'dev': 6, 'nlink': 1, 'atime': 1773801539.2155814, 'mtime': 1773801539.2115815, 'ctime': 1773801539.2115815, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb/osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'}, 'ansible_loop_var': 'item'})  2026-03-18 04:45:39.768396 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a/osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 960, 'dev': 6, 'nlink': 1, 'atime': 
1773801558.2348719, 'mtime': 1773801558.2288718, 'ctime': 1773801558.2288718, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a/osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'}, 'ansible_loop_var': 'item'})  2026-03-18 04:45:39.768419 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:45:39.768437 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af/osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 949, 'dev': 6, 'nlink': 1, 'atime': 1773801539.4729843, 'mtime': 1773801539.4669843, 'ctime': 1773801539.4669843, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af/osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'}, 'ansible_loop_var': 'item'})  2026-03-18 04:45:39.768457 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d/osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 959, 'dev': 6, 'nlink': 1, 'atime': 1773801558.1962361, 'mtime': 1773801558.1892362, 'ctime': 1773801558.1892362, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d/osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'}, 'ansible_loop_var': 'item'})  2026-03-18 04:45:41.490474 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:45:41.490581 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': 
'/dev/ceph-def37aef-ab10-5729-81f7-b9371c5efcea/osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1773801539.2228045, 'mtime': 1773801539.2188046, 'ctime': 1773801539.2188046, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-def37aef-ab10-5729-81f7-b9371c5efcea/osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'}, 'ansible_loop_var': 'item'})
2026-03-18 04:45:41.490641 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f/osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1773801557.9170866, 'mtime': 1773801557.9130864, 'ctime': 1773801557.9130864, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f/osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'}, 'ansible_loop_var': 'item'})
2026-03-18 04:45:41.490656 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:41.490668 | orchestrator |
2026-03-18 04:45:41.490680 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] ***********************
2026-03-18 04:45:41.490692 | orchestrator | Wednesday 18 March 2026 04:45:39 +0000 (0:00:00.404) 0:02:11.543 *******
2026-03-18 04:45:41.490705 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})
2026-03-18 04:45:41.490718 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})
2026-03-18 04:45:41.490729 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:41.490740 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 04:45:41.490751 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 04:45:41.490761 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:41.490772 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})
2026-03-18 04:45:41.490783 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})
2026-03-18 04:45:41.490794 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:41.490804 | orchestrator |
2026-03-18 04:45:41.490816 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] ***
2026-03-18 04:45:41.490845 | orchestrator | Wednesday 18 March 2026 04:45:40 +0000 (0:00:00.411) 0:02:11.955 *******
2026-03-18 04:45:41.490858 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'}, 'ansible_loop_var': 'item'})
2026-03-18 04:45:41.490879 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'}, 'ansible_loop_var': 'item'})
2026-03-18 04:45:41.490891 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:41.490902 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'}, 'ansible_loop_var': 'item'})
2026-03-18 04:45:41.490913 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'}, 'ansible_loop_var': 'item'})
2026-03-18 04:45:41.490924 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:41.490935 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'}, 'ansible_loop_var': 'item'})
2026-03-18 04:45:41.490946 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'}, 'ansible_loop_var': 'item'})
2026-03-18 04:45:41.490957 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:41.490968 | orchestrator |
2026-03-18 04:45:41.490986 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] **********************
2026-03-18 04:45:41.491006 | orchestrator | Wednesday 18 March 2026 04:45:40 +0000 (0:00:00.393) 0:02:12.348 *******
2026-03-18 04:45:41.491033 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'})
2026-03-18 04:45:41.491057 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'})
2026-03-18 04:45:41.491076 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:41.491095 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'})
2026-03-18 04:45:41.491113 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'})
2026-03-18 04:45:41.491133 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:41.491152 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'})
2026-03-18 04:45:41.491171 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'})
2026-03-18 04:45:41.491191 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:41.491237 | orchestrator |
2026-03-18 04:45:41.491256 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] ***
2026-03-18 04:45:41.491275 | orchestrator | Wednesday 18 March 2026 04:45:41 +0000 (0:00:00.630) 0:02:12.979 *******
2026-03-18 04:45:41.491295 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-dcb28020-3d32-5af4-a4b7-0acc667eefcb', 'data_vg': 'ceph-dcb28020-3d32-5af4-a4b7-0acc667eefcb'}, 'ansible_loop_var': 'item'})
2026-03-18 04:45:41.491342 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-9a3797da-ebdd-566a-aa35-3713ec7e039a', 'data_vg': 'ceph-9a3797da-ebdd-566a-aa35-3713ec7e039a'}, 'ansible_loop_var': 'item'})
2026-03-18 04:45:45.550571 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:45.550675 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-d0e002fd-9a73-564c-a03c-ee3a79d477af', 'data_vg': 'ceph-d0e002fd-9a73-564c-a03c-ee3a79d477af'}, 'ansible_loop_var': 'item'})
2026-03-18 04:45:45.550692 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-ab16e1e8-130f-595d-96ba-aeefaeb1133d', 'data_vg': 'ceph-ab16e1e8-130f-595d-96ba-aeefaeb1133d'}, 'ansible_loop_var': 'item'})
2026-03-18 04:45:45.550704 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:45.550716 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-def37aef-ab10-5729-81f7-b9371c5efcea', 'data_vg': 'ceph-def37aef-ab10-5729-81f7-b9371c5efcea'}, 'ansible_loop_var': 'item'})
2026-03-18 04:45:45.550728 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-f498c8c9-64fb-5c46-ab13-dfed2090c41f', 'data_vg': 'ceph-f498c8c9-64fb-5c46-ab13-dfed2090c41f'}, 'ansible_loop_var': 'item'})
2026-03-18 04:45:45.550738 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:45.550750 | orchestrator |
2026-03-18 04:45:45.550762 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] *******************************
2026-03-18 04:45:45.550775 | orchestrator | Wednesday 18 March 2026 04:45:41 +0000 (0:00:00.402) 0:02:13.381 *******
2026-03-18 04:45:45.550786 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:45:45.550797 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:45:45.550808 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:45:45.550819 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:45.550830 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:45.550841 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:45.550851 | orchestrator | skipping: [testbed-manager]
2026-03-18 04:45:45.550862 | orchestrator |
2026-03-18 04:45:45.550873 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] *****************************
2026-03-18 04:45:45.550900 | orchestrator | Wednesday 18 March 2026 04:45:42 +0000 (0:00:00.760) 0:02:14.142 *******
2026-03-18 04:45:45.550912 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:45:45.550923 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:45:45.550934 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:45:45.550945 | orchestrator | skipping: [testbed-manager]
2026-03-18 04:45:45.550957 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-18 04:45:45.550968 | orchestrator |
2026-03-18 04:45:45.550979 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] **************
2026-03-18 04:45:45.550990 | orchestrator | Wednesday 18 March 2026 04:45:44 +0000 (0:00:01.589) 0:02:15.731 *******
2026-03-18 04:45:45.551002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551085 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:45.551096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551158 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:45.551170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551277 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:45.551290 | orchestrator |
2026-03-18 04:45:45.551303 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ********************
2026-03-18 04:45:45.551316 | orchestrator | Wednesday 18 March 2026 04:45:44 +0000 (0:00:00.442) 0:02:16.174 *******
2026-03-18 04:45:45.551329 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551390 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:45.551403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551477 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:45.551488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551521 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551543 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:45.551553 | orchestrator |
2026-03-18 04:45:45.551564 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ********************
2026-03-18 04:45:45.551575 | orchestrator | Wednesday 18 March 2026 04:45:45 +0000 (0:00:00.713) 0:02:16.887 *******
2026-03-18 04:45:45.551586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551607 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551639 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:45.551650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:45.551678 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:52.959304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:52.959401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:52.959412 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:52.959420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:52.959426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:52.959433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:52.959439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:52.959445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 04:45:52.959473 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:52.959480 | orchestrator |
2026-03-18 04:45:52.959487 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] ***********************************
2026-03-18 04:45:52.959495 | orchestrator | Wednesday 18 March 2026 04:45:45 +0000 (0:00:00.461) 0:02:17.349 *******
2026-03-18 04:45:52.959501 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:45:52.959506 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:45:52.959512 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:45:52.959517 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:52.959523 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:52.959529 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:52.959534 | orchestrator | skipping: [testbed-manager]
2026-03-18 04:45:52.959541 | orchestrator |
2026-03-18 04:45:52.959548 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] *****************************
2026-03-18 04:45:52.959554 | orchestrator | Wednesday 18 March 2026 04:45:46 +0000 (0:00:00.841) 0:02:18.191 *******
2026-03-18 04:45:52.959560 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:45:52.959565 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:45:52.959571 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:45:52.959577 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:52.959583 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:52.959590 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:52.959595 | orchestrator | skipping: [testbed-manager]
2026-03-18 04:45:52.959601 | orchestrator |
2026-03-18 04:45:52.959620 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ******************
2026-03-18 04:45:52.959627 | orchestrator | Wednesday 18 March 2026 04:45:47 +0000 (0:00:01.040) 0:02:19.232 *******
2026-03-18 04:45:52.959633 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:45:52.959639 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:45:52.959646 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:45:52.959651 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:52.959657 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:52.959662 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:52.959668 | orchestrator | skipping: [testbed-manager]
2026-03-18 04:45:52.959673 | orchestrator |
2026-03-18 04:45:52.959679 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] ***
2026-03-18 04:45:52.959685 | orchestrator | Wednesday 18 March 2026 04:45:48 +0000 (0:00:01.021) 0:02:20.254 *******
2026-03-18 04:45:52.959690 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:45:52.959696 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:45:52.959702 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:45:52.959707 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:52.959713 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:52.959720 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:52.959725 | orchestrator | skipping: [testbed-manager]
2026-03-18 04:45:52.959731 | orchestrator |
2026-03-18 04:45:52.959737 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] ***
2026-03-18 04:45:52.959743 | orchestrator | Wednesday 18 March 2026 04:45:49 +0000 (0:00:00.786) 0:02:21.040 *******
2026-03-18 04:45:52.959749 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:45:52.959755 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:45:52.959761 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:45:52.959767 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:52.959774 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:52.959779 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:52.959785 | orchestrator | skipping: [testbed-manager]
2026-03-18 04:45:52.959792 | orchestrator |
2026-03-18 04:45:52.959798 | orchestrator | TASK [ceph-validate : Validate container registry credentials] *****************
2026-03-18 04:45:52.959803 | orchestrator | Wednesday 18 March 2026 04:45:50 +0000 (0:00:01.072) 0:02:22.112 *******
2026-03-18 04:45:52.959809 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:45:52.959815 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:45:52.959828 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:45:52.959835 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:52.959841 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:52.959846 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:52.959852 | orchestrator | skipping: [testbed-manager]
2026-03-18 04:45:52.959858 | orchestrator |
2026-03-18 04:45:52.959864 | orchestrator | TASK [ceph-validate : Validate container service and container package] ********
2026-03-18 04:45:52.959871 | orchestrator | Wednesday 18 March 2026 04:45:51 +0000 (0:00:00.760) 0:02:22.873 *******
2026-03-18 04:45:52.959877 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:45:52.959883 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:45:52.959889 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:45:52.959895 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:52.959901 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:52.959907 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:52.959913 | orchestrator | skipping: [testbed-manager]
2026-03-18 04:45:52.959919 | orchestrator |
2026-03-18 04:45:52.959942 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] **********************
2026-03-18 04:45:52.959948 | orchestrator | Wednesday 18 March 2026 04:45:52 +0000 (0:00:01.042) 0:02:23.916 *******
2026-03-18 04:45:52.959954 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-18 04:45:52.959961 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-18 04:45:52.959969 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-18 04:45:52.959977 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-18 04:45:52.959984 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-18 04:45:52.959992 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-18 04:45:52.959998 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:45:52.960005 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-18 04:45:52.960011 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-18 04:45:52.960017 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-18 04:45:52.960028 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-18 04:45:52.960035 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-18 04:45:52.960040 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-18 04:45:52.960046 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:45:52.960051 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-18 04:45:52.960064 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-18 04:45:52.960070 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-18 04:45:52.960076 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-18 04:45:52.960082 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-18 04:45:52.960087 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-18 04:45:52.960094 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:45:52.960100 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-18 04:45:52.960107 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-18 04:45:52.960119 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-18 04:45:55.152253 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-18 04:45:55.152341 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-18 04:45:55.152356 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-18 04:45:55.152368 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-18 04:45:55.152376 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-18 04:45:55.152385 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-18 04:45:55.152395 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-18 04:45:55.152404 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-18 04:45:55.152413 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:55.152423 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-18 04:45:55.152447 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-18 04:45:55.152474 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-18 04:45:55.152483 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-18 04:45:55.152492 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-18 04:45:55.152500 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-18 04:45:55.152509 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-18 04:45:55.152518 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-18 04:45:55.152526 | orchestrator | skipping: [testbed-manager]
2026-03-18 04:45:55.152535 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-18 04:45:55.152543 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:55.152552 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-18 04:45:55.152561 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-18 04:45:55.152569 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-18 04:45:55.152578 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-18 04:45:55.152587 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:55.152596 | orchestrator |
2026-03-18 04:45:55.152618 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-03-18 04:45:55.152629 | orchestrator | Wednesday 18 March 2026 04:45:53 +0000 (0:00:01.169) 0:02:25.086 *******
2026-03-18 04:45:55.152638 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:45:55.152647 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:45:55.152656 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:45:55.152665 | orchestrator | skipping: [testbed-node-3]
2026-03-18 04:45:55.152673 | orchestrator | skipping: [testbed-node-4]
2026-03-18 04:45:55.152682 | orchestrator | skipping: [testbed-node-5]
2026-03-18 04:45:55.152690 | orchestrator | skipping: [testbed-manager]
2026-03-18 04:45:55.152699 | orchestrator |
2026-03-18 04:45:55.152707 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-03-18 04:45:55.152716 | orchestrator | Wednesday 18 March 2026 04:45:54 +0000 (0:00:01.025) 0:02:26.111 *******
2026-03-18 04:45:55.152725 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-18 04:45:55.152734 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-18 04:45:55.152742 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-18 04:45:55.152757 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-18 04:45:55.152766 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-18 04:45:55.152776 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-18 04:45:55.152787 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-18 04:45:55.152801 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-18 04:45:55.152812 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-18 04:45:55.152822 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-18 04:45:55.152831 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-18 04:45:55.152841 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-18 04:45:55.152851 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:45:55.152861 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-18 04:45:55.152871 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-18 04:45:55.152881 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd
pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-18 04:45:55.152891 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-18 04:45:55.152900 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-18 04:45:55.152910 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-18 04:45:55.152920 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:45:55.152930 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:45:55.152949 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-18 04:46:05.426380 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-18 04:46:05.426505 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-18 04:46:05.426555 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-18 04:46:05.426573 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-18 04:46:05.426588 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-18 04:46:05.426603 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-18 04:46:05.426621 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-18 04:46:05.426636 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-18 04:46:05.426651 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-18 04:46:05.426667 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:46:05.426700 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-18 04:46:05.426716 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-18 04:46:05.426731 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-18 04:46:05.426746 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-18 
04:46:05.426761 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-18 04:46:05.426775 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-18 04:46:05.426790 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-18 04:46:05.426805 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-18 04:46:05.426820 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-18 04:46:05.426835 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-18 04:46:05.426851 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:46:05.426867 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:46:05.426884 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-18 04:46:05.426909 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-18 04:46:05.426942 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd 
pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-18 04:46:05.426956 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-18 04:46:05.426970 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:46:05.426983 | orchestrator | 2026-03-18 04:46:05.426998 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ****************************** 2026-03-18 04:46:05.427013 | orchestrator | Wednesday 18 March 2026 04:45:55 +0000 (0:00:01.174) 0:02:27.285 ******* 2026-03-18 04:46:05.427027 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:46:05.427040 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:46:05.427055 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:46:05.427069 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:46:05.427082 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:46:05.427096 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:46:05.427109 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:46:05.427122 | orchestrator | 2026-03-18 04:46:05.427136 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-03-18 04:46:05.427153 | orchestrator | Wednesday 18 March 2026 04:45:56 +0000 (0:00:01.041) 0:02:28.327 ******* 2026-03-18 04:46:05.427166 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:46:05.427179 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:46:05.427192 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:46:05.427206 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:46:05.427220 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:46:05.427263 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:46:05.427276 | orchestrator | skipping: [testbed-manager] 2026-03-18 
04:46:05.427289 | orchestrator | 2026-03-18 04:46:05.427301 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-03-18 04:46:05.427315 | orchestrator | Wednesday 18 March 2026 04:45:57 +0000 (0:00:00.961) 0:02:29.289 ******* 2026-03-18 04:46:05.427328 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:46:05.427341 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:46:05.427355 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:46:05.427368 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:46:05.427383 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:46:05.427397 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:46:05.427410 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:46:05.427423 | orchestrator | 2026-03-18 04:46:05.427436 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-03-18 04:46:05.427461 | orchestrator | Wednesday 18 March 2026 04:45:59 +0000 (0:00:01.523) 0:02:30.812 ******* 2026-03-18 04:46:05.427476 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-03-18 04:46:05.427493 | orchestrator | 2026-03-18 04:46:05.427506 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-03-18 04:46:05.427520 | orchestrator | Wednesday 18 March 2026 04:46:01 +0000 (0:00:01.925) 0:02:32.738 ******* 2026-03-18 04:46:05.427533 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-18 04:46:05.427548 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-18 04:46:05.427562 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-18 
04:46:05.427589 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-18 04:46:05.427603 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-18 04:46:05.427616 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-18 04:46:05.427629 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-18 04:46:05.427643 | orchestrator | 2026-03-18 04:46:05.427657 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] **** 2026-03-18 04:46:05.427672 | orchestrator | Wednesday 18 March 2026 04:46:02 +0000 (0:00:00.939) 0:02:33.678 ******* 2026-03-18 04:46:05.427687 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:46:05.427700 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:46:05.427714 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:46:05.427727 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:46:05.427740 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:46:05.427755 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:46:05.427769 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:46:05.427784 | orchestrator | 2026-03-18 04:46:05.427799 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] ********* 2026-03-18 04:46:05.427816 | orchestrator | Wednesday 18 March 2026 04:46:03 +0000 (0:00:01.097) 0:02:34.776 ******* 2026-03-18 04:46:05.427830 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:46:05.427843 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:46:05.427855 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:46:05.427867 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:46:05.427879 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:46:05.427892 | orchestrator | skipping: [testbed-node-5] 
2026-03-18 04:46:05.427902 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:46:05.427913 | orchestrator | 2026-03-18 04:46:05.427925 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] *************** 2026-03-18 04:46:05.427937 | orchestrator | Wednesday 18 March 2026 04:46:03 +0000 (0:00:00.806) 0:02:35.583 ******* 2026-03-18 04:46:05.427949 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:46:05.427963 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:46:05.427975 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:46:05.427987 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:46:05.427998 | orchestrator | ok: [testbed-node-4] 2026-03-18 04:46:05.428009 | orchestrator | ok: [testbed-node-5] 2026-03-18 04:46:05.428032 | orchestrator | ok: [testbed-manager] 2026-03-18 04:46:28.004026 | orchestrator | 2026-03-18 04:46:28.004135 | orchestrator | TASK [ceph-container-engine : Restart docker] ********************************** 2026-03-18 04:46:28.004149 | orchestrator | Wednesday 18 March 2026 04:46:05 +0000 (0:00:01.439) 0:02:37.022 ******* 2026-03-18 04:46:28.004156 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:46:28.004164 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:46:28.004170 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:46:28.004176 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:46:28.004180 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:46:28.004184 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:46:28.004188 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:46:28.004192 | orchestrator | 2026-03-18 04:46:28.004196 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-03-18 04:46:28.004200 | orchestrator | Wednesday 18 March 2026 04:46:06 +0000 (0:00:01.539) 0:02:38.562 ******* 2026-03-18 04:46:28.004204 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:46:28.004208 | orchestrator | 
skipping: [testbed-node-1] 2026-03-18 04:46:28.004212 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:46:28.004216 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:46:28.004219 | orchestrator | skipping: [testbed-node-4] 2026-03-18 04:46:28.004223 | orchestrator | skipping: [testbed-node-5] 2026-03-18 04:46:28.004226 | orchestrator | skipping: [testbed-manager] 2026-03-18 04:46:28.004230 | orchestrator | 2026-03-18 04:46:28.004234 | orchestrator | TASK [Get the ceph release being deployed] ************************************* 2026-03-18 04:46:28.004275 | orchestrator | Wednesday 18 March 2026 04:46:08 +0000 (0:00:01.541) 0:02:40.104 ******* 2026-03-18 04:46:28.004281 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:46:28.004285 | orchestrator | 2026-03-18 04:46:28.004289 | orchestrator | TASK [Check ceph release being deployed] *************************************** 2026-03-18 04:46:28.004293 | orchestrator | Wednesday 18 March 2026 04:46:10 +0000 (0:00:01.660) 0:02:41.765 ******* 2026-03-18 04:46:28.004297 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:46:28.004300 | orchestrator | 2026-03-18 04:46:28.004304 | orchestrator | PLAY [Ensure cluster config is applied] **************************************** 2026-03-18 04:46:28.004308 | orchestrator | 2026-03-18 04:46:28.004311 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 04:46:28.004315 | orchestrator | Wednesday 18 March 2026 04:46:11 +0000 (0:00:00.931) 0:02:42.696 ******* 2026-03-18 04:46:28.004319 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:46:28.004323 | orchestrator | 2026-03-18 04:46:28.004326 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 04:46:28.004331 | orchestrator | Wednesday 18 March 2026 04:46:11 +0000 (0:00:00.441) 0:02:43.138 ******* 2026-03-18 04:46:28.004335 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:46:28.004339 | 
orchestrator | 2026-03-18 04:46:28.004353 | orchestrator | TASK [Set cluster configs] ***************************************************** 2026-03-18 04:46:28.004356 | orchestrator | Wednesday 18 March 2026 04:46:11 +0000 (0:00:00.466) 0:02:43.605 ******* 2026-03-18 04:46:28.004362 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-18 04:46:28.004369 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-18 04:46:28.004373 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-18 04:46:28.004376 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-18 04:46:28.004382 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 
'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-18 04:46:28.004398 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}])  2026-03-18 04:46:28.004409 | orchestrator | 2026-03-18 04:46:28.004413 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-18 04:46:28.004417 | orchestrator | 2026-03-18 04:46:28.004420 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-18 04:46:28.004424 | orchestrator | Wednesday 18 March 2026 04:46:21 +0000 (0:00:09.386) 0:02:52.991 ******* 2026-03-18 04:46:28.004428 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:46:28.004431 | orchestrator | 2026-03-18 04:46:28.004435 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-18 04:46:28.004439 | orchestrator | Wednesday 18 March 2026 04:46:21 +0000 (0:00:00.491) 0:02:53.482 ******* 2026-03-18 04:46:28.004442 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:46:28.004446 | orchestrator | 2026-03-18 04:46:28.004450 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-18 04:46:28.004453 | orchestrator | Wednesday 18 March 2026 04:46:22 +0000 (0:00:00.155) 0:02:53.638 ******* 2026-03-18 04:46:28.004457 | orchestrator | skipping: 
[testbed-node-0] 2026-03-18 04:46:28.004461 | orchestrator | 2026-03-18 04:46:28.004465 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-03-18 04:46:28.004468 | orchestrator | Wednesday 18 March 2026 04:46:22 +0000 (0:00:00.140) 0:02:53.778 ******* 2026-03-18 04:46:28.004472 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:46:28.004476 | orchestrator | 2026-03-18 04:46:28.004479 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 04:46:28.004483 | orchestrator | Wednesday 18 March 2026 04:46:22 +0000 (0:00:00.141) 0:02:53.920 ******* 2026-03-18 04:46:28.004487 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-18 04:46:28.004490 | orchestrator | 2026-03-18 04:46:28.004494 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-18 04:46:28.004498 | orchestrator | Wednesday 18 March 2026 04:46:22 +0000 (0:00:00.260) 0:02:54.181 ******* 2026-03-18 04:46:28.004502 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:46:28.004505 | orchestrator | 2026-03-18 04:46:28.004509 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-18 04:46:28.004513 | orchestrator | Wednesday 18 March 2026 04:46:23 +0000 (0:00:00.495) 0:02:54.677 ******* 2026-03-18 04:46:28.004516 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:46:28.004520 | orchestrator | 2026-03-18 04:46:28.004524 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 04:46:28.004530 | orchestrator | Wednesday 18 March 2026 04:46:23 +0000 (0:00:00.134) 0:02:54.812 ******* 2026-03-18 04:46:28.004534 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:46:28.004538 | orchestrator | 2026-03-18 04:46:28.004541 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 
2026-03-18 04:46:28.004545 | orchestrator | Wednesday 18 March 2026 04:46:23 +0000 (0:00:00.466) 0:02:55.278 ******* 2026-03-18 04:46:28.004549 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:46:28.004552 | orchestrator | 2026-03-18 04:46:28.004556 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-18 04:46:28.004560 | orchestrator | Wednesday 18 March 2026 04:46:24 +0000 (0:00:00.401) 0:02:55.680 ******* 2026-03-18 04:46:28.004564 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:46:28.004567 | orchestrator | 2026-03-18 04:46:28.004572 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-18 04:46:28.004576 | orchestrator | Wednesday 18 March 2026 04:46:24 +0000 (0:00:00.161) 0:02:55.841 ******* 2026-03-18 04:46:28.004581 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:46:28.004585 | orchestrator | 2026-03-18 04:46:28.004589 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-18 04:46:28.004594 | orchestrator | Wednesday 18 March 2026 04:46:24 +0000 (0:00:00.191) 0:02:56.032 ******* 2026-03-18 04:46:28.004598 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:46:28.004602 | orchestrator | 2026-03-18 04:46:28.004607 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-18 04:46:28.004611 | orchestrator | Wednesday 18 March 2026 04:46:24 +0000 (0:00:00.170) 0:02:56.203 ******* 2026-03-18 04:46:28.004618 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:46:28.004623 | orchestrator | 2026-03-18 04:46:28.004627 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-18 04:46:28.004631 | orchestrator | Wednesday 18 March 2026 04:46:24 +0000 (0:00:00.151) 0:02:56.354 ******* 2026-03-18 04:46:28.004636 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:46:28.004641 
| orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:46:28.004645 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:46:28.004649 | orchestrator | 2026-03-18 04:46:28.004654 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-18 04:46:28.004658 | orchestrator | Wednesday 18 March 2026 04:46:25 +0000 (0:00:00.677) 0:02:57.032 ******* 2026-03-18 04:46:28.004662 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:46:28.004667 | orchestrator | 2026-03-18 04:46:28.004671 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-18 04:46:28.004676 | orchestrator | Wednesday 18 March 2026 04:46:25 +0000 (0:00:00.259) 0:02:57.292 ******* 2026-03-18 04:46:28.004680 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:46:28.004684 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:46:28.004689 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:46:28.004693 | orchestrator | 2026-03-18 04:46:28.004698 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-18 04:46:28.004702 | orchestrator | Wednesday 18 March 2026 04:46:27 +0000 (0:00:01.894) 0:02:59.186 ******* 2026-03-18 04:46:28.004707 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-18 04:46:28.004711 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-18 04:46:28.004718 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-18 04:46:34.348248 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:46:34.348386 | orchestrator | 2026-03-18 04:46:34.348398 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] 
*********************
2026-03-18 04:46:34.348408 | orchestrator | Wednesday 18 March 2026 04:46:27 +0000 (0:00:00.422) 0:02:59.608 *******
2026-03-18 04:46:34.348416 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-18 04:46:34.348426 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-18 04:46:34.348434 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-18 04:46:34.348441 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:34.348449 | orchestrator |
2026-03-18 04:46:34.348457 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-18 04:46:34.348464 | orchestrator | Wednesday 18 March 2026 04:46:28 +0000 (0:00:00.968) 0:03:00.577 *******
2026-03-18 04:46:34.348473 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-18 04:46:34.348499 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-18 04:46:34.348522 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-18 04:46:34.348530 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:34.348538 | orchestrator |
2026-03-18 04:46:34.348545 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-18 04:46:34.348552 | orchestrator | Wednesday 18 March 2026 04:46:29 +0000 (0:00:00.192) 0:03:00.769 *******
2026-03-18 04:46:34.348561 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'dfaa0207b10e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 04:46:26.233969', 'end': '2026-03-18 04:46:26.282723', 'delta': '0:00:00.048754', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['dfaa0207b10e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-18 04:46:34.348608 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '1edfdf2d0145', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 04:46:26.809100', 'end': '2026-03-18 04:46:26.852435', 'delta': '0:00:00.043335', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1edfdf2d0145'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-18 04:46:34.348618 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'fc8e238828f1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 04:46:27.370125', 'end': '2026-03-18 04:46:27.417843', 'delta': '0:00:00.047718', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc8e238828f1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-18 04:46:34.348625 | orchestrator |
2026-03-18 04:46:34.348633 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-18 04:46:34.348640 | orchestrator | Wednesday 18 March 2026 04:46:29 +0000 (0:00:00.212) 0:03:00.982 *******
2026-03-18 04:46:34.348647 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:46:34.348655 | orchestrator |
2026-03-18 04:46:34.348663 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-18 04:46:34.348670 | orchestrator | Wednesday 18 March 2026 04:46:29 +0000 (0:00:00.375) 0:03:01.357 *******
2026-03-18 04:46:34.348677 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:34.348690 | orchestrator |
2026-03-18 04:46:34.348698 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-18 04:46:34.348705 | orchestrator | Wednesday 18 March 2026 04:46:30 +0000 (0:00:00.891) 0:03:02.248 *******
2026-03-18 04:46:34.348712 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:46:34.348719 | orchestrator |
2026-03-18 04:46:34.348726 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-18 04:46:34.348733 | orchestrator | Wednesday 18 March 2026 04:46:30 +0000 (0:00:00.173) 0:03:02.422 *******
2026-03-18 04:46:34.348740 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-03-18 04:46:34.348748 | orchestrator |
2026-03-18 04:46:34.348755 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-18 04:46:34.348766 | orchestrator | Wednesday 18 March 2026 04:46:32 +0000 (0:00:01.459) 0:03:03.882 *******
2026-03-18 04:46:34.348775 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:46:34.348783 | orchestrator |
2026-03-18 04:46:34.348791 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-18 04:46:34.348800 | orchestrator | Wednesday 18 March 2026 04:46:32 +0000 (0:00:00.156) 0:03:04.039 *******
2026-03-18 04:46:34.348808 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:34.348816 | orchestrator |
2026-03-18 04:46:34.348824 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-18 04:46:34.348832 | orchestrator | Wednesday 18 March 2026 04:46:32 +0000 (0:00:00.142) 0:03:04.181 *******
2026-03-18 04:46:34.348840 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:34.348849 | orchestrator |
2026-03-18 04:46:34.348857 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-18 04:46:34.348865 | orchestrator | Wednesday 18 March 2026 04:46:32 +0000 (0:00:00.236) 0:03:04.418 *******
2026-03-18 04:46:34.348874 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:34.348882 | orchestrator |
2026-03-18 04:46:34.348890 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-18 04:46:34.348898 | orchestrator | Wednesday 18 March 2026 04:46:32 +0000 (0:00:00.131) 0:03:04.549 *******
2026-03-18 04:46:34.348906 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:34.348915 | orchestrator |
2026-03-18 04:46:34.348923 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-18 04:46:34.348931 | orchestrator | Wednesday 18 March 2026 04:46:33 +0000 (0:00:00.153) 0:03:04.703 *******
2026-03-18 04:46:34.348939 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:34.348947 | orchestrator |
2026-03-18 04:46:34.348955 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-18 04:46:34.348963 | orchestrator | Wednesday 18 March 2026 04:46:33 +0000 (0:00:00.147) 0:03:04.851 *******
2026-03-18 04:46:34.348971 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:34.348979 | orchestrator |
2026-03-18 04:46:34.348987 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-18 04:46:34.348996 | orchestrator | Wednesday 18 March 2026 04:46:33 +0000 (0:00:00.152) 0:03:05.003 *******
2026-03-18 04:46:34.349004 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:34.349012 | orchestrator |
2026-03-18 04:46:34.349020 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-18 04:46:34.349028 | orchestrator | Wednesday 18 March 2026 04:46:33 +0000 (0:00:00.144) 0:03:05.147 *******
2026-03-18 04:46:34.349036 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:34.349044 | orchestrator |
2026-03-18 04:46:34.349053 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-18 04:46:34.349061 | orchestrator | Wednesday 18 March 2026 04:46:33 +0000 (0:00:00.140) 0:03:05.288 *******
2026-03-18 04:46:34.349070 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:34.349078 | orchestrator |
2026-03-18 04:46:34.349086 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-18 04:46:34.349095 | orchestrator | Wednesday 18 March 2026 04:46:33 +0000 (0:00:00.140) 0:03:05.428 *******
2026-03-18 04:46:34.349115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 04:46:34.612989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 04:46:34.613097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 04:46:34.613116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-18 04:46:34.613149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 04:46:34.613163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 04:46:34.613177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 04:46:34.613214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd04444e1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-18 04:46:34.613282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 04:46:34.613299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 04:46:34.613317 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:34.613332 | orchestrator |
2026-03-18 04:46:34.613345 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-18 04:46:34.613358 | orchestrator | Wednesday 18 March 2026 04:46:34 +0000 (0:00:00.532) 0:03:05.960 *******
2026-03-18 04:46:34.613372 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:46:34.613386 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:46:34.613399 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:46:34.613431 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:46:38.688690 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:46:38.688803 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:46:38.688836 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:46:38.688886 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd04444e1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:46:38.688939 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:46:38.688967 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:46:38.688980 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:38.688994 | orchestrator |
2026-03-18 04:46:38.689006 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-18 04:46:38.689018 | orchestrator | Wednesday 18 March 2026 04:46:34 +0000 (0:00:00.260) 0:03:06.220 *******
2026-03-18 04:46:38.689029 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:46:38.689040 | orchestrator |
2026-03-18 04:46:38.689051 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-18 04:46:38.689062 | orchestrator | Wednesday 18 March 2026 04:46:35 +0000 (0:00:00.509) 0:03:06.730 *******
2026-03-18 04:46:38.689072 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:46:38.689083 | orchestrator |
2026-03-18 04:46:38.689094 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-18 04:46:38.689105 | orchestrator | Wednesday 18 March 2026 04:46:35 +0000 (0:00:00.135) 0:03:06.865 *******
2026-03-18 04:46:38.689115 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:46:38.689126 | orchestrator |
2026-03-18 04:46:38.689137 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-18 04:46:38.689148 | orchestrator | Wednesday 18 March 2026 04:46:35 +0000 (0:00:00.488) 0:03:07.353 *******
2026-03-18 04:46:38.689158 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:38.689176 | orchestrator |
2026-03-18 04:46:38.689187 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-18 04:46:38.689198 | orchestrator | Wednesday 18 March 2026 04:46:35 +0000 (0:00:00.153) 0:03:07.507 *******
2026-03-18 04:46:38.689209 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:38.689219 | orchestrator |
2026-03-18 04:46:38.689230 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-18 04:46:38.689240 | orchestrator | Wednesday 18 March 2026 04:46:36 +0000 (0:00:00.262) 0:03:07.769 *******
2026-03-18 04:46:38.689251 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:38.689294 | orchestrator |
2026-03-18 04:46:38.689307 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-18 04:46:38.689318 | orchestrator | Wednesday 18 March 2026 04:46:36 +0000 (0:00:00.173) 0:03:07.943 *******
2026-03-18 04:46:38.689328 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-18 04:46:38.689340 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-18 04:46:38.689350 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-18 04:46:38.689373 | orchestrator |
2026-03-18 04:46:38.689385 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-18 04:46:38.689395 | orchestrator | Wednesday 18 March 2026 04:46:37 +0000 (0:00:00.931) 0:03:08.875 *******
2026-03-18 04:46:38.689406 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-18 04:46:38.689417 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-18 04:46:38.689428 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-18 04:46:38.689438 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:38.689449 | orchestrator |
2026-03-18 04:46:38.689460 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-18 04:46:38.689470 | orchestrator | Wednesday 18 March 2026 04:46:37 +0000 (0:00:00.180) 0:03:09.056 *******
2026-03-18 04:46:38.689481 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:46:38.689491 | orchestrator |
2026-03-18 04:46:38.689502 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-18 04:46:38.689512 | orchestrator | Wednesday 18 March 2026 04:46:37 +0000 (0:00:00.144) 0:03:09.200 *******
2026-03-18 04:46:38.689523 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-18 04:46:38.689533 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 04:46:38.689545 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 04:46:38.689555 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-18 04:46:38.689566 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-18 04:46:38.689585 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-18 04:47:07.970145 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 04:47:07.970360 | orchestrator |
2026-03-18 04:47:07.970398 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-18 04:47:07.970421 | orchestrator | Wednesday 18 March 2026 04:46:38 +0000 (0:00:01.094) 0:03:10.295 *******
2026-03-18 04:47:07.970442 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-18 04:47:07.970464 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 04:47:07.970483 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 04:47:07.970503 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-18 04:47:07.970522 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-18 04:47:07.970542 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-18 04:47:07.970563 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 04:47:07.970612 | orchestrator |
2026-03-18 04:47:07.970626 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-03-18 04:47:07.970639 | orchestrator | Wednesday 18 March 2026 04:46:40 +0000 (0:00:01.960) 0:03:12.256 *******
2026-03-18 04:47:07.970652 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-03-18 04:47:07.970665 | orchestrator |
2026-03-18 04:47:07.970678 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-03-18 04:47:07.970705 | orchestrator | Wednesday 18 March 2026 04:46:41 +0000 (0:00:01.229) 0:03:13.486 *******
2026-03-18 04:47:07.970717 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:47:07.970728 | orchestrator |
2026-03-18 04:47:07.970739 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-03-18 04:47:07.970749 | orchestrator | Wednesday 18 March 2026 04:46:42 +0000 (0:00:00.243) 0:03:13.729 *******
2026-03-18 04:47:07.970760 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:47:07.970771 | orchestrator |
2026-03-18 04:47:07.970782 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-03-18 04:47:07.970792 | orchestrator | Wednesday 18 March 2026 04:46:42 +0000 (0:00:00.160) 0:03:13.889 *******
2026-03-18 04:47:07.970803 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-03-18 04:47:07.970820 | orchestrator |
2026-03-18 04:47:07.970837 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-03-18 04:47:07.970855 | orchestrator | Wednesday 18 March 2026 04:46:43 +0000 (0:00:01.349) 0:03:15.238 *******
2026-03-18 04:47:07.970872 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:47:07.970889 | orchestrator |
2026-03-18 04:47:07.970907 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-03-18 04:47:07.970923 | orchestrator | Wednesday 18 March 2026 04:46:43 +0000 (0:00:00.155) 0:03:15.394 *******
2026-03-18 04:47:07.970940 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-18 04:47:07.970958 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 04:47:07.970976 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 04:47:07.970994 | orchestrator |
2026-03-18 04:47:07.971012 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-03-18 04:47:07.971032 | orchestrator | Wednesday 18 March 2026 04:46:45 +0000 (0:00:01.546) 0:03:16.941 *******
2026-03-18 04:47:07.971050 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-03-18 04:47:07.971068 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-03-18 04:47:07.971081 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-03-18 04:47:07.971092 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-03-18 04:47:07.971103 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-03-18 04:47:07.971114 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-03-18 04:47:07.971125 | orchestrator |
2026-03-18 04:47:07.971136 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-03-18 04:47:07.971147 | orchestrator | Wednesday 18 March 2026 04:46:57 +0000 (0:00:11.841) 0:03:28.782 *******
2026-03-18 04:47:07.971157 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-03-18 04:47:07.971168 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-18 04:47:07.971179 | orchestrator |
2026-03-18 04:47:07.971190 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-03-18 04:47:07.971201 | orchestrator | Wednesday 18 March 2026 04:46:59 +0000 (0:00:02.802) 0:03:31.585 *******
2026-03-18 04:47:07.971212 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:47:07.971234 | orchestrator |
2026-03-18 04:47:07.971245 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-18 04:47:07.971256 | orchestrator | Wednesday 18 March 2026 04:47:01 +0000 (0:00:01.546) 0:03:33.131 *******
2026-03-18 04:47:07.971266 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-03-18 04:47:07.971277 | orchestrator |
2026-03-18 04:47:07.971333 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-18 04:47:07.971346 | orchestrator | Wednesday 18 March 2026 04:47:02 +0000 (0:00:00.567) 0:03:33.698 *******
2026-03-18 04:47:07.971382 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-03-18 04:47:07.971395 | orchestrator |
2026-03-18 04:47:07.971406 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-18 04:47:07.971416 | orchestrator | Wednesday 18 March 2026 04:47:02 +0000 (0:00:00.892) 0:03:34.591 *******
2026-03-18 04:47:07.971427 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:47:07.971438 | orchestrator |
2026-03-18 04:47:07.971450 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-18 04:47:07.971461 | orchestrator | Wednesday 18 March 2026 04:47:03 +0000 (0:00:00.521) 0:03:35.113 *******
2026-03-18 04:47:07.971472 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:47:07.971482 | orchestrator |
2026-03-18 04:47:07.971493 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-18 04:47:07.971503 | orchestrator | Wednesday 18 March 2026 04:47:03 +0000 (0:00:00.141) 0:03:35.254 *******
2026-03-18 04:47:07.971514 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:47:07.971525 | orchestrator |
2026-03-18 04:47:07.971536 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-18 04:47:07.971546 | orchestrator | Wednesday 18 March 2026 04:47:03 +0000 (0:00:00.134) 0:03:35.389 *******
2026-03-18 04:47:07.971557 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:47:07.971568 | orchestrator |
2026-03-18 04:47:07.971578 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-18 04:47:07.971589 | orchestrator | Wednesday 18 March 2026 04:47:03 +0000 (0:00:00.154) 0:03:35.544 *******
2026-03-18 04:47:07.971600 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:47:07.971611 | orchestrator |
2026-03-18 04:47:07.971621 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-18 04:47:07.971632 | orchestrator | Wednesday 18 March 2026 04:47:04 +0000 (0:00:00.563) 0:03:36.107 *******
2026-03-18 04:47:07.971643 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:47:07.971653 | orchestrator |
2026-03-18 04:47:07.971673 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-18 04:47:07.971684 | orchestrator | Wednesday 18 March 2026 04:47:04 +0000 (0:00:00.133) 0:03:36.241 *******
2026-03-18 04:47:07.971695 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:47:07.971706 | orchestrator |
2026-03-18 04:47:07.971717 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-18 04:47:07.971728 | orchestrator | Wednesday 18 March 2026 04:47:04 +0000 (0:00:00.148) 0:03:36.389 *******
2026-03-18 04:47:07.971739 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:47:07.971750 | orchestrator |
2026-03-18 04:47:07.971760 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-18 04:47:07.971771 | orchestrator | Wednesday 18 March 2026 04:47:05 +0000 (0:00:00.550) 0:03:36.940 *******
2026-03-18 04:47:07.971782 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:47:07.971793 | orchestrator |
2026-03-18 04:47:07.971804 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-18 04:47:07.971814 | orchestrator | Wednesday 18 March 2026 04:47:05 +0000 (0:00:00.599) 0:03:37.539 *******
2026-03-18 04:47:07.971825 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:47:07.971836 | orchestrator |
2026-03-18 04:47:07.971847 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-18 04:47:07.971857 | orchestrator | Wednesday 18 March 2026 04:47:06 +0000 (0:00:00.139) 0:03:37.678 *******
2026-03-18 04:47:07.971875 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:47:07.971886 | orchestrator |
2026-03-18 04:47:07.971897 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-18 04:47:07.971908 | orchestrator | Wednesday 18 March 2026 04:47:06 +0000 (0:00:00.165) 0:03:37.844 *******
2026-03-18 04:47:07.971919 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:47:07.971929 | orchestrator |
2026-03-18 04:47:07.971940 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-18 04:47:07.971951 | orchestrator | Wednesday 18 March 2026 04:47:06 +0000 (0:00:00.141) 0:03:37.985 *******
2026-03-18 04:47:07.971962 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:47:07.971972 | orchestrator |
2026-03-18 04:47:07.971983 | orchestrator | TASK [ceph-handler : Set_fact
handler_rgw_status] ****************************** 2026-03-18 04:47:07.971994 | orchestrator | Wednesday 18 March 2026 04:47:06 +0000 (0:00:00.147) 0:03:38.133 ******* 2026-03-18 04:47:07.972005 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:07.972015 | orchestrator | 2026-03-18 04:47:07.972026 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-18 04:47:07.972037 | orchestrator | Wednesday 18 March 2026 04:47:06 +0000 (0:00:00.415) 0:03:38.548 ******* 2026-03-18 04:47:07.972047 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:07.972058 | orchestrator | 2026-03-18 04:47:07.972069 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-18 04:47:07.972080 | orchestrator | Wednesday 18 March 2026 04:47:07 +0000 (0:00:00.140) 0:03:38.688 ******* 2026-03-18 04:47:07.972090 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:07.972101 | orchestrator | 2026-03-18 04:47:07.972112 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-18 04:47:07.972123 | orchestrator | Wednesday 18 March 2026 04:47:07 +0000 (0:00:00.151) 0:03:38.840 ******* 2026-03-18 04:47:07.972133 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:47:07.972144 | orchestrator | 2026-03-18 04:47:07.972155 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-18 04:47:07.972165 | orchestrator | Wednesday 18 March 2026 04:47:07 +0000 (0:00:00.151) 0:03:38.992 ******* 2026-03-18 04:47:07.972176 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:47:07.972187 | orchestrator | 2026-03-18 04:47:07.972198 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-18 04:47:07.972208 | orchestrator | Wednesday 18 March 2026 04:47:07 +0000 (0:00:00.162) 0:03:39.154 ******* 2026-03-18 04:47:07.972219 | orchestrator | ok: 
[testbed-node-0] 2026-03-18 04:47:07.972230 | orchestrator | 2026-03-18 04:47:07.972240 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-18 04:47:07.972251 | orchestrator | Wednesday 18 March 2026 04:47:07 +0000 (0:00:00.257) 0:03:39.412 ******* 2026-03-18 04:47:07.972262 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:07.972272 | orchestrator | 2026-03-18 04:47:07.972283 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-18 04:47:07.972326 | orchestrator | Wednesday 18 March 2026 04:47:07 +0000 (0:00:00.160) 0:03:39.572 ******* 2026-03-18 04:47:21.684539 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.684623 | orchestrator | 2026-03-18 04:47:21.684633 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-18 04:47:21.684640 | orchestrator | Wednesday 18 March 2026 04:47:08 +0000 (0:00:00.145) 0:03:39.718 ******* 2026-03-18 04:47:21.684645 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.684650 | orchestrator | 2026-03-18 04:47:21.684656 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-18 04:47:21.684661 | orchestrator | Wednesday 18 March 2026 04:47:08 +0000 (0:00:00.136) 0:03:39.854 ******* 2026-03-18 04:47:21.684666 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.684671 | orchestrator | 2026-03-18 04:47:21.684676 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-18 04:47:21.684682 | orchestrator | Wednesday 18 March 2026 04:47:08 +0000 (0:00:00.139) 0:03:39.994 ******* 2026-03-18 04:47:21.684703 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.684709 | orchestrator | 2026-03-18 04:47:21.684714 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-18 04:47:21.684719 | 
orchestrator | Wednesday 18 March 2026 04:47:08 +0000 (0:00:00.138) 0:03:40.132 ******* 2026-03-18 04:47:21.684724 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.684729 | orchestrator | 2026-03-18 04:47:21.684734 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-18 04:47:21.684739 | orchestrator | Wednesday 18 March 2026 04:47:08 +0000 (0:00:00.130) 0:03:40.263 ******* 2026-03-18 04:47:21.684744 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.684749 | orchestrator | 2026-03-18 04:47:21.684754 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-18 04:47:21.684770 | orchestrator | Wednesday 18 March 2026 04:47:09 +0000 (0:00:00.408) 0:03:40.672 ******* 2026-03-18 04:47:21.684775 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.684780 | orchestrator | 2026-03-18 04:47:21.684785 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-18 04:47:21.684790 | orchestrator | Wednesday 18 March 2026 04:47:09 +0000 (0:00:00.141) 0:03:40.813 ******* 2026-03-18 04:47:21.684795 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.684800 | orchestrator | 2026-03-18 04:47:21.684805 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-18 04:47:21.684810 | orchestrator | Wednesday 18 March 2026 04:47:09 +0000 (0:00:00.141) 0:03:40.954 ******* 2026-03-18 04:47:21.684816 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.684821 | orchestrator | 2026-03-18 04:47:21.684827 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-18 04:47:21.684834 | orchestrator | Wednesday 18 March 2026 04:47:09 +0000 (0:00:00.140) 0:03:41.095 ******* 2026-03-18 04:47:21.684842 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.684850 | 
orchestrator | 2026-03-18 04:47:21.684856 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-18 04:47:21.684861 | orchestrator | Wednesday 18 March 2026 04:47:09 +0000 (0:00:00.134) 0:03:41.230 ******* 2026-03-18 04:47:21.684866 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.684871 | orchestrator | 2026-03-18 04:47:21.684876 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-18 04:47:21.684881 | orchestrator | Wednesday 18 March 2026 04:47:09 +0000 (0:00:00.210) 0:03:41.441 ******* 2026-03-18 04:47:21.684886 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:47:21.684892 | orchestrator | 2026-03-18 04:47:21.684897 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-18 04:47:21.684902 | orchestrator | Wednesday 18 March 2026 04:47:10 +0000 (0:00:00.956) 0:03:42.397 ******* 2026-03-18 04:47:21.684908 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:47:21.684913 | orchestrator | 2026-03-18 04:47:21.684918 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-18 04:47:21.684923 | orchestrator | Wednesday 18 March 2026 04:47:12 +0000 (0:00:01.509) 0:03:43.907 ******* 2026-03-18 04:47:21.684928 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-03-18 04:47:21.684934 | orchestrator | 2026-03-18 04:47:21.684939 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-18 04:47:21.684944 | orchestrator | Wednesday 18 March 2026 04:47:12 +0000 (0:00:00.586) 0:03:44.494 ******* 2026-03-18 04:47:21.684949 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.684954 | orchestrator | 2026-03-18 04:47:21.684959 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-18 
04:47:21.684964 | orchestrator | Wednesday 18 March 2026 04:47:13 +0000 (0:00:00.154) 0:03:44.649 ******* 2026-03-18 04:47:21.684970 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.684974 | orchestrator | 2026-03-18 04:47:21.684980 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-18 04:47:21.684990 | orchestrator | Wednesday 18 March 2026 04:47:13 +0000 (0:00:00.133) 0:03:44.782 ******* 2026-03-18 04:47:21.684996 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-18 04:47:21.685001 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-18 04:47:21.685006 | orchestrator | 2026-03-18 04:47:21.685011 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-18 04:47:21.685016 | orchestrator | Wednesday 18 March 2026 04:47:14 +0000 (0:00:01.213) 0:03:45.995 ******* 2026-03-18 04:47:21.685021 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:47:21.685026 | orchestrator | 2026-03-18 04:47:21.685031 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-18 04:47:21.685036 | orchestrator | Wednesday 18 March 2026 04:47:15 +0000 (0:00:00.725) 0:03:46.721 ******* 2026-03-18 04:47:21.685041 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.685046 | orchestrator | 2026-03-18 04:47:21.685051 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-18 04:47:21.685057 | orchestrator | Wednesday 18 March 2026 04:47:15 +0000 (0:00:00.175) 0:03:46.897 ******* 2026-03-18 04:47:21.685062 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.685067 | orchestrator | 2026-03-18 04:47:21.685082 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-18 04:47:21.685087 | orchestrator | Wednesday 18 
March 2026 04:47:15 +0000 (0:00:00.150) 0:03:47.047 ******* 2026-03-18 04:47:21.685092 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.685097 | orchestrator | 2026-03-18 04:47:21.685103 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-18 04:47:21.685109 | orchestrator | Wednesday 18 March 2026 04:47:15 +0000 (0:00:00.142) 0:03:47.190 ******* 2026-03-18 04:47:21.685114 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-03-18 04:47:21.685120 | orchestrator | 2026-03-18 04:47:21.685126 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-18 04:47:21.685132 | orchestrator | Wednesday 18 March 2026 04:47:16 +0000 (0:00:00.622) 0:03:47.813 ******* 2026-03-18 04:47:21.685137 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:47:21.685143 | orchestrator | 2026-03-18 04:47:21.685149 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-18 04:47:21.685155 | orchestrator | Wednesday 18 March 2026 04:47:16 +0000 (0:00:00.741) 0:03:48.554 ******* 2026-03-18 04:47:21.685161 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 04:47:21.685167 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 04:47:21.685173 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-18 04:47:21.685179 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.685185 | orchestrator | 2026-03-18 04:47:21.685190 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-18 04:47:21.685199 | orchestrator | Wednesday 18 March 2026 04:47:17 +0000 (0:00:00.185) 0:03:48.739 ******* 2026-03-18 04:47:21.685205 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.685211 
| orchestrator | 2026-03-18 04:47:21.685217 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-18 04:47:21.685222 | orchestrator | Wednesday 18 March 2026 04:47:17 +0000 (0:00:00.158) 0:03:48.898 ******* 2026-03-18 04:47:21.685228 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.685234 | orchestrator | 2026-03-18 04:47:21.685240 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-18 04:47:21.685246 | orchestrator | Wednesday 18 March 2026 04:47:17 +0000 (0:00:00.195) 0:03:49.093 ******* 2026-03-18 04:47:21.685251 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.685257 | orchestrator | 2026-03-18 04:47:21.685263 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-18 04:47:21.685269 | orchestrator | Wednesday 18 March 2026 04:47:17 +0000 (0:00:00.162) 0:03:49.256 ******* 2026-03-18 04:47:21.685278 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.685284 | orchestrator | 2026-03-18 04:47:21.685290 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-18 04:47:21.685296 | orchestrator | Wednesday 18 March 2026 04:47:17 +0000 (0:00:00.159) 0:03:49.415 ******* 2026-03-18 04:47:21.685319 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.685325 | orchestrator | 2026-03-18 04:47:21.685331 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-18 04:47:21.685337 | orchestrator | Wednesday 18 March 2026 04:47:18 +0000 (0:00:00.442) 0:03:49.858 ******* 2026-03-18 04:47:21.685342 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:47:21.685348 | orchestrator | 2026-03-18 04:47:21.685355 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-18 04:47:21.685363 | orchestrator | Wednesday 18 March 2026 04:47:19 
+0000 (0:00:01.650) 0:03:51.509 ******* 2026-03-18 04:47:21.685371 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:47:21.685377 | orchestrator | 2026-03-18 04:47:21.685383 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-18 04:47:21.685388 | orchestrator | Wednesday 18 March 2026 04:47:20 +0000 (0:00:00.161) 0:03:51.671 ******* 2026-03-18 04:47:21.685394 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-03-18 04:47:21.685400 | orchestrator | 2026-03-18 04:47:21.685406 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-18 04:47:21.685411 | orchestrator | Wednesday 18 March 2026 04:47:20 +0000 (0:00:00.636) 0:03:52.308 ******* 2026-03-18 04:47:21.685418 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.685423 | orchestrator | 2026-03-18 04:47:21.685429 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-18 04:47:21.685435 | orchestrator | Wednesday 18 March 2026 04:47:20 +0000 (0:00:00.172) 0:03:52.480 ******* 2026-03-18 04:47:21.685441 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.685447 | orchestrator | 2026-03-18 04:47:21.685453 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-18 04:47:21.685459 | orchestrator | Wednesday 18 March 2026 04:47:21 +0000 (0:00:00.165) 0:03:52.646 ******* 2026-03-18 04:47:21.685464 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.685470 | orchestrator | 2026-03-18 04:47:21.685476 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-18 04:47:21.685481 | orchestrator | Wednesday 18 March 2026 04:47:21 +0000 (0:00:00.158) 0:03:52.804 ******* 2026-03-18 04:47:21.685486 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.685491 | orchestrator | 
2026-03-18 04:47:21.685496 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-18 04:47:21.685502 | orchestrator | Wednesday 18 March 2026 04:47:21 +0000 (0:00:00.151) 0:03:52.956 ******* 2026-03-18 04:47:21.685507 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.685512 | orchestrator | 2026-03-18 04:47:21.685517 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-18 04:47:21.685522 | orchestrator | Wednesday 18 March 2026 04:47:21 +0000 (0:00:00.169) 0:03:53.125 ******* 2026-03-18 04:47:21.685527 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:21.685532 | orchestrator | 2026-03-18 04:47:21.685537 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-18 04:47:21.685546 | orchestrator | Wednesday 18 March 2026 04:47:21 +0000 (0:00:00.166) 0:03:53.291 ******* 2026-03-18 04:47:36.589628 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.589743 | orchestrator | 2026-03-18 04:47:36.589760 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-18 04:47:36.589772 | orchestrator | Wednesday 18 March 2026 04:47:21 +0000 (0:00:00.157) 0:03:53.449 ******* 2026-03-18 04:47:36.589784 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.589795 | orchestrator | 2026-03-18 04:47:36.589806 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-18 04:47:36.589844 | orchestrator | Wednesday 18 March 2026 04:47:21 +0000 (0:00:00.159) 0:03:53.608 ******* 2026-03-18 04:47:36.589856 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:47:36.589867 | orchestrator | 2026-03-18 04:47:36.589878 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-18 04:47:36.589889 | orchestrator | Wednesday 18 March 2026 04:47:22 +0000 
(0:00:00.521) 0:03:54.130 ******* 2026-03-18 04:47:36.589901 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-03-18 04:47:36.589912 | orchestrator | 2026-03-18 04:47:36.589923 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-18 04:47:36.589934 | orchestrator | Wednesday 18 March 2026 04:47:23 +0000 (0:00:00.587) 0:03:54.717 ******* 2026-03-18 04:47:36.589945 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-03-18 04:47:36.589956 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-18 04:47:36.589967 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-18 04:47:36.589977 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-18 04:47:36.589988 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-18 04:47:36.590014 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-18 04:47:36.590087 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-18 04:47:36.590098 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-18 04:47:36.590109 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 04:47:36.590120 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 04:47:36.590131 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 04:47:36.590141 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 04:47:36.590162 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 04:47:36.590175 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 04:47:36.590188 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-03-18 04:47:36.590200 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 
2026-03-18 04:47:36.590212 | orchestrator | 2026-03-18 04:47:36.590224 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-18 04:47:36.590237 | orchestrator | Wednesday 18 March 2026 04:47:28 +0000 (0:00:05.896) 0:04:00.613 ******* 2026-03-18 04:47:36.590250 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.590262 | orchestrator | 2026-03-18 04:47:36.590274 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-18 04:47:36.590287 | orchestrator | Wednesday 18 March 2026 04:47:29 +0000 (0:00:00.151) 0:04:00.764 ******* 2026-03-18 04:47:36.590299 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.590346 | orchestrator | 2026-03-18 04:47:36.590359 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-18 04:47:36.590372 | orchestrator | Wednesday 18 March 2026 04:47:29 +0000 (0:00:00.151) 0:04:00.916 ******* 2026-03-18 04:47:36.590384 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.590396 | orchestrator | 2026-03-18 04:47:36.590408 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-18 04:47:36.590420 | orchestrator | Wednesday 18 March 2026 04:47:29 +0000 (0:00:00.154) 0:04:01.071 ******* 2026-03-18 04:47:36.590434 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.590446 | orchestrator | 2026-03-18 04:47:36.590458 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-18 04:47:36.590471 | orchestrator | Wednesday 18 March 2026 04:47:29 +0000 (0:00:00.180) 0:04:01.252 ******* 2026-03-18 04:47:36.590483 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.590496 | orchestrator | 2026-03-18 04:47:36.590508 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-18 04:47:36.590521 | orchestrator | 
Wednesday 18 March 2026 04:47:29 +0000 (0:00:00.138) 0:04:01.390 ******* 2026-03-18 04:47:36.590541 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.590551 | orchestrator | 2026-03-18 04:47:36.590562 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-18 04:47:36.590573 | orchestrator | Wednesday 18 March 2026 04:47:29 +0000 (0:00:00.152) 0:04:01.542 ******* 2026-03-18 04:47:36.590584 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.590595 | orchestrator | 2026-03-18 04:47:36.590605 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-18 04:47:36.590616 | orchestrator | Wednesday 18 March 2026 04:47:30 +0000 (0:00:00.140) 0:04:01.683 ******* 2026-03-18 04:47:36.590627 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.590637 | orchestrator | 2026-03-18 04:47:36.590648 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-18 04:47:36.590659 | orchestrator | Wednesday 18 March 2026 04:47:30 +0000 (0:00:00.140) 0:04:01.823 ******* 2026-03-18 04:47:36.590670 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.590680 | orchestrator | 2026-03-18 04:47:36.590691 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-18 04:47:36.590702 | orchestrator | Wednesday 18 March 2026 04:47:30 +0000 (0:00:00.137) 0:04:01.960 ******* 2026-03-18 04:47:36.590713 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.590723 | orchestrator | 2026-03-18 04:47:36.590734 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-18 04:47:36.590763 | orchestrator | Wednesday 18 March 2026 04:47:30 +0000 (0:00:00.395) 0:04:02.356 ******* 2026-03-18 04:47:36.590774 | orchestrator | 
skipping: [testbed-node-0] 2026-03-18 04:47:36.590785 | orchestrator | 2026-03-18 04:47:36.590796 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-18 04:47:36.590806 | orchestrator | Wednesday 18 March 2026 04:47:30 +0000 (0:00:00.150) 0:04:02.507 ******* 2026-03-18 04:47:36.590817 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.590828 | orchestrator | 2026-03-18 04:47:36.590838 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-18 04:47:36.590849 | orchestrator | Wednesday 18 March 2026 04:47:31 +0000 (0:00:00.153) 0:04:02.660 ******* 2026-03-18 04:47:36.590860 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.590870 | orchestrator | 2026-03-18 04:47:36.590881 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-18 04:47:36.590892 | orchestrator | Wednesday 18 March 2026 04:47:31 +0000 (0:00:00.249) 0:04:02.909 ******* 2026-03-18 04:47:36.590902 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.590913 | orchestrator | 2026-03-18 04:47:36.590924 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-18 04:47:36.590934 | orchestrator | Wednesday 18 March 2026 04:47:31 +0000 (0:00:00.143) 0:04:03.053 ******* 2026-03-18 04:47:36.590945 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.590956 | orchestrator | 2026-03-18 04:47:36.590966 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-18 04:47:36.590977 | orchestrator | Wednesday 18 March 2026 04:47:31 +0000 (0:00:00.229) 0:04:03.283 ******* 2026-03-18 04:47:36.590987 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.590998 | orchestrator | 2026-03-18 04:47:36.591014 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-18 
04:47:36.591026 | orchestrator | Wednesday 18 March 2026 04:47:31 +0000 (0:00:00.131) 0:04:03.415 ******* 2026-03-18 04:47:36.591036 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.591047 | orchestrator | 2026-03-18 04:47:36.591058 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 04:47:36.591070 | orchestrator | Wednesday 18 March 2026 04:47:31 +0000 (0:00:00.153) 0:04:03.568 ******* 2026-03-18 04:47:36.591080 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.591098 | orchestrator | 2026-03-18 04:47:36.591109 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-18 04:47:36.591119 | orchestrator | Wednesday 18 March 2026 04:47:32 +0000 (0:00:00.149) 0:04:03.718 ******* 2026-03-18 04:47:36.591130 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.591141 | orchestrator | 2026-03-18 04:47:36.591151 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 04:47:36.591162 | orchestrator | Wednesday 18 March 2026 04:47:32 +0000 (0:00:00.161) 0:04:03.879 ******* 2026-03-18 04:47:36.591172 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.591183 | orchestrator | 2026-03-18 04:47:36.591194 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 04:47:36.591205 | orchestrator | Wednesday 18 March 2026 04:47:32 +0000 (0:00:00.143) 0:04:04.022 ******* 2026-03-18 04:47:36.591215 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.591226 | orchestrator | 2026-03-18 04:47:36.591236 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 04:47:36.591247 | orchestrator | Wednesday 18 March 2026 04:47:32 +0000 (0:00:00.153) 0:04:04.175 ******* 2026-03-18 04:47:36.591258 | orchestrator | 
skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-18 04:47:36.591269 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-18 04:47:36.591280 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-18 04:47:36.591290 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.591301 | orchestrator | 2026-03-18 04:47:36.591327 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 04:47:36.591339 | orchestrator | Wednesday 18 March 2026 04:47:33 +0000 (0:00:00.754) 0:04:04.930 ******* 2026-03-18 04:47:36.591350 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-18 04:47:36.591361 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-18 04:47:36.591372 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-18 04:47:36.591383 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.591393 | orchestrator | 2026-03-18 04:47:36.591404 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 04:47:36.591415 | orchestrator | Wednesday 18 March 2026 04:47:34 +0000 (0:00:01.047) 0:04:05.977 ******* 2026-03-18 04:47:36.591425 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-18 04:47:36.591436 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-18 04:47:36.591447 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-18 04:47:36.591457 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.591468 | orchestrator | 2026-03-18 04:47:36.591478 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 04:47:36.591489 | orchestrator | Wednesday 18 March 2026 04:47:34 +0000 (0:00:00.467) 0:04:06.445 ******* 2026-03-18 04:47:36.591500 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.591510 | 
orchestrator | 2026-03-18 04:47:36.591521 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 04:47:36.591532 | orchestrator | Wednesday 18 March 2026 04:47:34 +0000 (0:00:00.154) 0:04:06.599 ******* 2026-03-18 04:47:36.591543 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-18 04:47:36.591554 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:47:36.591564 | orchestrator | 2026-03-18 04:47:36.591575 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-18 04:47:36.591586 | orchestrator | Wednesday 18 March 2026 04:47:35 +0000 (0:00:00.733) 0:04:07.333 ******* 2026-03-18 04:47:36.591596 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:47:36.591607 | orchestrator | 2026-03-18 04:47:36.591618 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-18 04:47:36.591635 | orchestrator | Wednesday 18 March 2026 04:47:36 +0000 (0:00:00.859) 0:04:08.193 ******* 2026-03-18 04:48:11.829423 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.829549 | orchestrator | 2026-03-18 04:48:11.829607 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-18 04:48:11.829628 | orchestrator | Wednesday 18 March 2026 04:47:36 +0000 (0:00:00.164) 0:04:08.357 ******* 2026-03-18 04:48:11.829645 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-03-18 04:48:11.829664 | orchestrator | 2026-03-18 04:48:11.829675 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-18 04:48:11.829685 | orchestrator | Wednesday 18 March 2026 04:47:37 +0000 (0:00:00.641) 0:04:08.998 ******* 2026-03-18 04:48:11.829695 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-03-18 04:48:11.829705 | orchestrator | 2026-03-18 04:48:11.829715 | orchestrator | TASK 
[ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-18 04:48:11.829724 | orchestrator | Wednesday 18 March 2026 04:47:39 +0000 (0:00:02.131) 0:04:11.130 ******* 2026-03-18 04:48:11.829734 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:48:11.829744 | orchestrator | 2026-03-18 04:48:11.829754 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-18 04:48:11.829766 | orchestrator | Wednesday 18 March 2026 04:47:39 +0000 (0:00:00.170) 0:04:11.301 ******* 2026-03-18 04:48:11.829783 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.829798 | orchestrator | 2026-03-18 04:48:11.829813 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-18 04:48:11.829828 | orchestrator | Wednesday 18 March 2026 04:47:39 +0000 (0:00:00.155) 0:04:11.457 ******* 2026-03-18 04:48:11.829844 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.829859 | orchestrator | 2026-03-18 04:48:11.829892 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-18 04:48:11.829908 | orchestrator | Wednesday 18 March 2026 04:47:40 +0000 (0:00:00.453) 0:04:11.910 ******* 2026-03-18 04:48:11.829924 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:48:11.829941 | orchestrator | 2026-03-18 04:48:11.829958 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-18 04:48:11.829975 | orchestrator | Wednesday 18 March 2026 04:47:41 +0000 (0:00:01.136) 0:04:13.046 ******* 2026-03-18 04:48:11.829991 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.830009 | orchestrator | 2026-03-18 04:48:11.830094 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-18 04:48:11.830106 | orchestrator | Wednesday 18 March 2026 04:47:42 +0000 (0:00:00.622) 0:04:13.668 ******* 2026-03-18 04:48:11.830117 | 
orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.830129 | orchestrator | 2026-03-18 04:48:11.830139 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-18 04:48:11.830150 | orchestrator | Wednesday 18 March 2026 04:47:42 +0000 (0:00:00.514) 0:04:14.183 ******* 2026-03-18 04:48:11.830160 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.830171 | orchestrator | 2026-03-18 04:48:11.830182 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-18 04:48:11.830193 | orchestrator | Wednesday 18 March 2026 04:47:43 +0000 (0:00:00.517) 0:04:14.700 ******* 2026-03-18 04:48:11.830203 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.830214 | orchestrator | 2026-03-18 04:48:11.830225 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-18 04:48:11.830236 | orchestrator | Wednesday 18 March 2026 04:47:43 +0000 (0:00:00.804) 0:04:15.504 ******* 2026-03-18 04:48:11.830246 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.830257 | orchestrator | 2026-03-18 04:48:11.830268 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-18 04:48:11.830279 | orchestrator | Wednesday 18 March 2026 04:47:44 +0000 (0:00:00.723) 0:04:16.228 ******* 2026-03-18 04:48:11.830290 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-18 04:48:11.830302 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-18 04:48:11.830311 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-18 04:48:11.830321 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-03-18 04:48:11.830366 | orchestrator | 2026-03-18 04:48:11.830380 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-18 04:48:11.830390 | orchestrator | Wednesday 18 March 2026 
04:47:47 +0000 (0:00:02.855) 0:04:19.083 ******* 2026-03-18 04:48:11.830400 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:48:11.830409 | orchestrator | 2026-03-18 04:48:11.830419 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-18 04:48:11.830429 | orchestrator | Wednesday 18 March 2026 04:47:48 +0000 (0:00:01.087) 0:04:20.171 ******* 2026-03-18 04:48:11.830439 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.830449 | orchestrator | 2026-03-18 04:48:11.830459 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-18 04:48:11.830468 | orchestrator | Wednesday 18 March 2026 04:47:48 +0000 (0:00:00.149) 0:04:20.320 ******* 2026-03-18 04:48:11.830478 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.830487 | orchestrator | 2026-03-18 04:48:11.830497 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-18 04:48:11.830507 | orchestrator | Wednesday 18 March 2026 04:47:48 +0000 (0:00:00.142) 0:04:20.463 ******* 2026-03-18 04:48:11.830516 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.830527 | orchestrator | 2026-03-18 04:48:11.830543 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-18 04:48:11.830556 | orchestrator | Wednesday 18 March 2026 04:47:49 +0000 (0:00:01.067) 0:04:21.530 ******* 2026-03-18 04:48:11.830565 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.830575 | orchestrator | 2026-03-18 04:48:11.830584 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-18 04:48:11.830594 | orchestrator | Wednesday 18 March 2026 04:47:50 +0000 (0:00:00.488) 0:04:22.019 ******* 2026-03-18 04:48:11.830604 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:48:11.830613 | orchestrator | 2026-03-18 04:48:11.830623 | orchestrator | TASK [ceph-mon : Include 
start_monitor.yml] ************************************ 2026-03-18 04:48:11.830632 | orchestrator | Wednesday 18 March 2026 04:47:50 +0000 (0:00:00.429) 0:04:22.449 ******* 2026-03-18 04:48:11.830663 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-03-18 04:48:11.830674 | orchestrator | 2026-03-18 04:48:11.830683 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-18 04:48:11.830693 | orchestrator | Wednesday 18 March 2026 04:47:51 +0000 (0:00:00.670) 0:04:23.119 ******* 2026-03-18 04:48:11.830703 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:48:11.830712 | orchestrator | 2026-03-18 04:48:11.830722 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-18 04:48:11.830731 | orchestrator | Wednesday 18 March 2026 04:47:51 +0000 (0:00:00.145) 0:04:23.265 ******* 2026-03-18 04:48:11.830741 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:48:11.830750 | orchestrator | 2026-03-18 04:48:11.830760 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-18 04:48:11.830770 | orchestrator | Wednesday 18 March 2026 04:47:51 +0000 (0:00:00.145) 0:04:23.411 ******* 2026-03-18 04:48:11.830779 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-03-18 04:48:11.830789 | orchestrator | 2026-03-18 04:48:11.830798 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-18 04:48:11.830808 | orchestrator | Wednesday 18 March 2026 04:47:52 +0000 (0:00:00.608) 0:04:24.019 ******* 2026-03-18 04:48:11.830817 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.830827 | orchestrator | 2026-03-18 04:48:11.830838 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-18 04:48:11.830855 | orchestrator | Wednesday 18 March 2026 
04:47:53 +0000 (0:00:01.319) 0:04:25.338 ******* 2026-03-18 04:48:11.830870 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.830885 | orchestrator | 2026-03-18 04:48:11.830908 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-18 04:48:11.830923 | orchestrator | Wednesday 18 March 2026 04:47:54 +0000 (0:00:01.072) 0:04:26.411 ******* 2026-03-18 04:48:11.830951 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.830968 | orchestrator | 2026-03-18 04:48:11.830985 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-18 04:48:11.830997 | orchestrator | Wednesday 18 March 2026 04:47:56 +0000 (0:00:01.503) 0:04:27.915 ******* 2026-03-18 04:48:11.831007 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:48:11.831016 | orchestrator | 2026-03-18 04:48:11.831025 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-18 04:48:11.831035 | orchestrator | Wednesday 18 March 2026 04:47:58 +0000 (0:00:02.315) 0:04:30.230 ******* 2026-03-18 04:48:11.831045 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-03-18 04:48:11.831054 | orchestrator | 2026-03-18 04:48:11.831064 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-03-18 04:48:11.831073 | orchestrator | Wednesday 18 March 2026 04:47:59 +0000 (0:00:00.650) 0:04:30.881 ******* 2026-03-18 04:48:11.831083 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.831092 | orchestrator | 2026-03-18 04:48:11.831102 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-18 04:48:11.831111 | orchestrator | Wednesday 18 March 2026 04:48:00 +0000 (0:00:01.528) 0:04:32.410 ******* 2026-03-18 04:48:11.831120 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:11.831130 | orchestrator | 2026-03-18 04:48:11.831139 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-18 04:48:11.831149 | orchestrator | Wednesday 18 March 2026 04:48:02 +0000 (0:00:02.027) 0:04:34.437 ******* 2026-03-18 04:48:11.831158 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:48:11.831168 | orchestrator | 2026-03-18 04:48:11.831177 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-18 04:48:11.831187 | orchestrator | Wednesday 18 March 2026 04:48:02 +0000 (0:00:00.135) 0:04:34.573 ******* 2026-03-18 04:48:11.831198 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-18 04:48:11.831211 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-03-18 04:48:11.831221 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-18 04:48:11.831231 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-18 04:48:11.831253 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-18 04:48:25.224164 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}])  2026-03-18 04:48:25.224293 | orchestrator | 2026-03-18 04:48:25.224308 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-18 04:48:25.224318 | orchestrator | Wednesday 18 March 2026 04:48:11 +0000 (0:00:08.859) 0:04:43.433 ******* 
2026-03-18 04:48:25.224327 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:48:25.224337 | orchestrator | 2026-03-18 04:48:25.224346 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-18 04:48:25.224404 | orchestrator | Wednesday 18 March 2026 04:48:13 +0000 (0:00:01.521) 0:04:44.954 ******* 2026-03-18 04:48:25.224414 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:48:25.224437 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-18 04:48:25.224447 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-18 04:48:25.224456 | orchestrator | 2026-03-18 04:48:25.224465 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-18 04:48:25.224473 | orchestrator | Wednesday 18 March 2026 04:48:14 +0000 (0:00:01.256) 0:04:46.210 ******* 2026-03-18 04:48:25.224482 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-18 04:48:25.224491 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-18 04:48:25.224500 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-18 04:48:25.224508 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:48:25.224517 | orchestrator | 2026-03-18 04:48:25.224526 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-18 04:48:25.224534 | orchestrator | Wednesday 18 March 2026 04:48:15 +0000 (0:00:00.506) 0:04:46.717 ******* 2026-03-18 04:48:25.224543 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:48:25.224551 | orchestrator | 2026-03-18 04:48:25.224560 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-18 04:48:25.224570 | orchestrator | Wednesday 18 March 2026 04:48:15 +0000 (0:00:00.160) 0:04:46.878 ******* 2026-03-18 04:48:25.224579 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:48:25.224588 | orchestrator | 2026-03-18 04:48:25.224597 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-18 04:48:25.224605 | orchestrator | 2026-03-18 04:48:25.224614 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-18 04:48:25.224622 | orchestrator | Wednesday 18 March 2026 04:48:16 +0000 (0:00:01.711) 0:04:48.589 ******* 2026-03-18 04:48:25.224631 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:25.224640 | orchestrator | 2026-03-18 04:48:25.224648 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-18 04:48:25.224657 | orchestrator | Wednesday 18 March 2026 04:48:17 +0000 (0:00:00.502) 0:04:49.092 ******* 2026-03-18 04:48:25.224665 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:25.224674 | orchestrator | 2026-03-18 04:48:25.224683 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-18 04:48:25.224692 | orchestrator | Wednesday 18 March 2026 04:48:17 +0000 (0:00:00.438) 0:04:49.530 ******* 2026-03-18 04:48:25.224700 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:25.224711 | orchestrator | 2026-03-18 04:48:25.224720 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-03-18 04:48:25.224730 | orchestrator | Wednesday 18 March 2026 04:48:18 +0000 (0:00:00.133) 0:04:49.664 ******* 2026-03-18 04:48:25.224740 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:25.224749 | orchestrator | 2026-03-18 04:48:25.224759 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 04:48:25.224769 | orchestrator | Wednesday 18 March 
2026 04:48:18 +0000 (0:00:00.136) 0:04:49.801 ******* 2026-03-18 04:48:25.224778 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-03-18 04:48:25.224795 | orchestrator | 2026-03-18 04:48:25.224805 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-18 04:48:25.224815 | orchestrator | Wednesday 18 March 2026 04:48:18 +0000 (0:00:00.260) 0:04:50.061 ******* 2026-03-18 04:48:25.224825 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:25.224834 | orchestrator | 2026-03-18 04:48:25.224844 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-18 04:48:25.224854 | orchestrator | Wednesday 18 March 2026 04:48:18 +0000 (0:00:00.456) 0:04:50.517 ******* 2026-03-18 04:48:25.224863 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:25.224872 | orchestrator | 2026-03-18 04:48:25.224882 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 04:48:25.224892 | orchestrator | Wednesday 18 March 2026 04:48:19 +0000 (0:00:00.140) 0:04:50.658 ******* 2026-03-18 04:48:25.224901 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:25.224911 | orchestrator | 2026-03-18 04:48:25.224921 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 04:48:25.224931 | orchestrator | Wednesday 18 March 2026 04:48:19 +0000 (0:00:00.501) 0:04:51.159 ******* 2026-03-18 04:48:25.224941 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:25.224950 | orchestrator | 2026-03-18 04:48:25.224960 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-18 04:48:25.224970 | orchestrator | Wednesday 18 March 2026 04:48:19 +0000 (0:00:00.184) 0:04:51.345 ******* 2026-03-18 04:48:25.224980 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:25.224989 | orchestrator | 2026-03-18 04:48:25.225000 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-18 04:48:25.225010 | orchestrator | Wednesday 18 March 2026 04:48:19 +0000 (0:00:00.146) 0:04:51.491 ******* 2026-03-18 04:48:25.225019 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:25.225029 | orchestrator | 2026-03-18 04:48:25.225054 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-18 04:48:25.225065 | orchestrator | Wednesday 18 March 2026 04:48:20 +0000 (0:00:00.163) 0:04:51.654 ******* 2026-03-18 04:48:25.225074 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:25.225082 | orchestrator | 2026-03-18 04:48:25.225091 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-18 04:48:25.225100 | orchestrator | Wednesday 18 March 2026 04:48:20 +0000 (0:00:00.140) 0:04:51.795 ******* 2026-03-18 04:48:25.225108 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:25.225117 | orchestrator | 2026-03-18 04:48:25.225125 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-18 04:48:25.225134 | orchestrator | Wednesday 18 March 2026 04:48:20 +0000 (0:00:00.168) 0:04:51.964 ******* 2026-03-18 04:48:25.225143 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:48:25.225152 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-18 04:48:25.225160 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:48:25.225169 | orchestrator | 2026-03-18 04:48:25.225178 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-18 04:48:25.225190 | orchestrator | Wednesday 18 March 2026 04:48:21 +0000 (0:00:01.400) 0:04:53.364 ******* 2026-03-18 04:48:25.225199 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:25.225208 | 
orchestrator | 2026-03-18 04:48:25.225217 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-18 04:48:25.225225 | orchestrator | Wednesday 18 March 2026 04:48:22 +0000 (0:00:00.278) 0:04:53.643 ******* 2026-03-18 04:48:25.225234 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:48:25.225242 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-18 04:48:25.225251 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:48:25.225260 | orchestrator | 2026-03-18 04:48:25.225268 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-18 04:48:25.225283 | orchestrator | Wednesday 18 March 2026 04:48:23 +0000 (0:00:01.820) 0:04:55.464 ******* 2026-03-18 04:48:25.225291 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-18 04:48:25.225300 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-18 04:48:25.225309 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-18 04:48:25.225317 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:25.225326 | orchestrator | 2026-03-18 04:48:25.225335 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-18 04:48:25.225344 | orchestrator | Wednesday 18 March 2026 04:48:24 +0000 (0:00:00.443) 0:04:55.908 ******* 2026-03-18 04:48:25.225387 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-18 04:48:25.225400 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-18 04:48:25.225409 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-18 04:48:25.225418 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:25.225427 | orchestrator | 2026-03-18 04:48:25.225436 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-18 04:48:25.225444 | orchestrator | Wednesday 18 March 2026 04:48:24 +0000 (0:00:00.683) 0:04:56.591 ******* 2026-03-18 04:48:25.225455 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:48:25.225466 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:48:25.225482 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:48:29.279335 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:29.279493 | orchestrator | 2026-03-18 04:48:29.279512 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-18 04:48:29.279526 | orchestrator | Wednesday 18 March 2026 04:48:25 +0000 (0:00:00.237) 0:04:56.829 ******* 2026-03-18 04:48:29.279558 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'f231ed715636', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 04:48:22.542174', 'end': '2026-03-18 04:48:22.596876', 'delta': '0:00:00.054702', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f231ed715636'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-18 04:48:29.279599 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '1edfdf2d0145', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 04:48:23.122646', 'end': '2026-03-18 04:48:23.168270', 'delta': '0:00:00.045624', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1edfdf2d0145'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-18 04:48:29.279611 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'fc8e238828f1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 04:48:23.666900', 'end': '2026-03-18 04:48:23.705315', 'delta': '0:00:00.038415', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc8e238828f1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-18 04:48:29.279623 | orchestrator | 2026-03-18 04:48:29.279634 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-18 04:48:29.279646 | orchestrator | Wednesday 18 March 2026 04:48:25 +0000 (0:00:00.244) 0:04:57.074 ******* 2026-03-18 04:48:29.279657 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:29.279669 | orchestrator | 2026-03-18 04:48:29.279680 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-18 04:48:29.279690 | orchestrator | Wednesday 18 March 2026 04:48:25 +0000 (0:00:00.283) 0:04:57.357 ******* 2026-03-18 04:48:29.279701 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:29.279712 | orchestrator | 2026-03-18 04:48:29.279723 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-18 04:48:29.279734 | orchestrator | Wednesday 18 March 2026 04:48:26 +0000 (0:00:00.271) 0:04:57.629 ******* 2026-03-18 04:48:29.279745 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:29.279756 | orchestrator | 2026-03-18 04:48:29.279766 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-03-18 04:48:29.279777 | orchestrator | Wednesday 18 March 2026 04:48:26 +0000 (0:00:00.153) 0:04:57.783 ******* 2026-03-18 04:48:29.279788 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-18 04:48:29.279799 | orchestrator | 2026-03-18 04:48:29.279809 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 04:48:29.279820 | orchestrator | Wednesday 18 March 2026 04:48:27 +0000 (0:00:00.960) 0:04:58.743 ******* 2026-03-18 04:48:29.279831 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:29.279842 | orchestrator | 2026-03-18 04:48:29.279852 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-18 04:48:29.279863 | orchestrator | Wednesday 18 March 2026 04:48:27 +0000 (0:00:00.186) 0:04:58.930 ******* 2026-03-18 04:48:29.279876 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:29.279889 | orchestrator | 2026-03-18 04:48:29.279901 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-18 04:48:29.279914 | orchestrator | Wednesday 18 March 2026 04:48:27 +0000 (0:00:00.146) 0:04:59.076 ******* 2026-03-18 04:48:29.279927 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:29.279947 | orchestrator | 2026-03-18 04:48:29.279959 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 04:48:29.279972 | orchestrator | Wednesday 18 March 2026 04:48:27 +0000 (0:00:00.221) 0:04:59.298 ******* 2026-03-18 04:48:29.279985 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:29.279997 | orchestrator | 2026-03-18 04:48:29.280026 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-18 04:48:29.280040 | orchestrator | Wednesday 18 March 2026 04:48:28 +0000 (0:00:00.420) 0:04:59.719 ******* 
2026-03-18 04:48:29.280052 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:29.280064 | orchestrator | 2026-03-18 04:48:29.280077 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-18 04:48:29.280088 | orchestrator | Wednesday 18 March 2026 04:48:28 +0000 (0:00:00.141) 0:04:59.860 ******* 2026-03-18 04:48:29.280101 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:29.280113 | orchestrator | 2026-03-18 04:48:29.280125 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-18 04:48:29.280138 | orchestrator | Wednesday 18 March 2026 04:48:28 +0000 (0:00:00.148) 0:05:00.009 ******* 2026-03-18 04:48:29.280151 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:29.280163 | orchestrator | 2026-03-18 04:48:29.280175 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-18 04:48:29.280187 | orchestrator | Wednesday 18 March 2026 04:48:28 +0000 (0:00:00.176) 0:05:00.185 ******* 2026-03-18 04:48:29.280199 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:29.280212 | orchestrator | 2026-03-18 04:48:29.280230 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-18 04:48:29.280241 | orchestrator | Wednesday 18 March 2026 04:48:28 +0000 (0:00:00.136) 0:05:00.322 ******* 2026-03-18 04:48:29.280252 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:29.280263 | orchestrator | 2026-03-18 04:48:29.280273 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-18 04:48:29.280285 | orchestrator | Wednesday 18 March 2026 04:48:28 +0000 (0:00:00.134) 0:05:00.457 ******* 2026-03-18 04:48:29.280295 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:29.280307 | orchestrator | 2026-03-18 04:48:29.280317 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-03-18 04:48:29.280328 | orchestrator | Wednesday 18 March 2026 04:48:29 +0000 (0:00:00.164) 0:05:00.622 ******* 2026-03-18 04:48:29.280340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:48:29.280354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:48:29.280385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:48:29.280397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 04:48:29.280417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:48:29.280429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:48:29.280448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:48:29.548143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a74f897f', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part16', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part14', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part15', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part1', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-18 04:48:29.548270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:48:29.548310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:48:29.548323 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:29.548336 | orchestrator | 2026-03-18 04:48:29.548349 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-18 04:48:29.548431 | orchestrator | Wednesday 18 March 2026 04:48:29 +0000 (0:00:00.261) 0:05:00.883 ******* 2026-03-18 04:48:29.548447 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:48:29.548476 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:48:29.548495 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:48:29.548508 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:48:29.548520 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:48:29.548548 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:48:29.548560 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:48:29.548588 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a74f897f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part16', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part14', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part15', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part1', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:48:43.676052 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:48:43.676201 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:48:43.676221 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:43.676235 | orchestrator | 2026-03-18 04:48:43.676248 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-18 04:48:43.676260 | 
orchestrator | Wednesday 18 March 2026 04:48:29 +0000 (0:00:00.275) 0:05:01.158 ******* 2026-03-18 04:48:43.676271 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:43.676282 | orchestrator | 2026-03-18 04:48:43.676294 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-18 04:48:43.676304 | orchestrator | Wednesday 18 March 2026 04:48:30 +0000 (0:00:00.498) 0:05:01.656 ******* 2026-03-18 04:48:43.676315 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:43.676326 | orchestrator | 2026-03-18 04:48:43.676336 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 04:48:43.676347 | orchestrator | Wednesday 18 March 2026 04:48:30 +0000 (0:00:00.136) 0:05:01.793 ******* 2026-03-18 04:48:43.676358 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:43.676407 | orchestrator | 2026-03-18 04:48:43.676421 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 04:48:43.676431 | orchestrator | Wednesday 18 March 2026 04:48:30 +0000 (0:00:00.500) 0:05:02.294 ******* 2026-03-18 04:48:43.676442 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:43.676453 | orchestrator | 2026-03-18 04:48:43.676463 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 04:48:43.676474 | orchestrator | Wednesday 18 March 2026 04:48:30 +0000 (0:00:00.126) 0:05:02.420 ******* 2026-03-18 04:48:43.676485 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:43.676495 | orchestrator | 2026-03-18 04:48:43.676506 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 04:48:43.676517 | orchestrator | Wednesday 18 March 2026 04:48:31 +0000 (0:00:00.883) 0:05:03.303 ******* 2026-03-18 04:48:43.676529 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:43.676550 | orchestrator | 2026-03-18 04:48:43.676569 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-18 04:48:43.676589 | orchestrator | Wednesday 18 March 2026 04:48:31 +0000 (0:00:00.213) 0:05:03.516 ******* 2026-03-18 04:48:43.676609 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-18 04:48:43.676628 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-18 04:48:43.676646 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-18 04:48:43.676666 | orchestrator | 2026-03-18 04:48:43.676685 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-18 04:48:43.676704 | orchestrator | Wednesday 18 March 2026 04:48:32 +0000 (0:00:00.693) 0:05:04.210 ******* 2026-03-18 04:48:43.676724 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-18 04:48:43.676744 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-18 04:48:43.676765 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-18 04:48:43.676799 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:43.676812 | orchestrator | 2026-03-18 04:48:43.676823 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-18 04:48:43.676833 | orchestrator | Wednesday 18 March 2026 04:48:32 +0000 (0:00:00.187) 0:05:04.398 ******* 2026-03-18 04:48:43.676844 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:43.676854 | orchestrator | 2026-03-18 04:48:43.676865 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-18 04:48:43.676876 | orchestrator | Wednesday 18 March 2026 04:48:32 +0000 (0:00:00.147) 0:05:04.546 ******* 2026-03-18 04:48:43.676886 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:48:43.676898 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-18 
04:48:43.676909 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:48:43.676920 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-18 04:48:43.676931 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 04:48:43.676941 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 04:48:43.676972 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 04:48:43.676983 | orchestrator | 2026-03-18 04:48:43.676994 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-18 04:48:43.677027 | orchestrator | Wednesday 18 March 2026 04:48:33 +0000 (0:00:00.950) 0:05:05.496 ******* 2026-03-18 04:48:43.677039 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:48:43.677049 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-18 04:48:43.677061 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:48:43.677082 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-18 04:48:43.677093 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 04:48:43.677104 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 04:48:43.677157 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 04:48:43.677169 | orchestrator | 2026-03-18 04:48:43.677180 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-18 04:48:43.677191 | orchestrator | Wednesday 18 March 2026 04:48:35 +0000 (0:00:01.664) 0:05:07.161 
******* 2026-03-18 04:48:43.677202 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:43.677212 | orchestrator | 2026-03-18 04:48:43.677223 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-03-18 04:48:43.677234 | orchestrator | Wednesday 18 March 2026 04:48:35 +0000 (0:00:00.233) 0:05:07.394 ******* 2026-03-18 04:48:43.677244 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:43.677255 | orchestrator | 2026-03-18 04:48:43.677266 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-03-18 04:48:43.677276 | orchestrator | Wednesday 18 March 2026 04:48:36 +0000 (0:00:00.225) 0:05:07.620 ******* 2026-03-18 04:48:43.677287 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:43.677298 | orchestrator | 2026-03-18 04:48:43.677308 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-03-18 04:48:43.677319 | orchestrator | Wednesday 18 March 2026 04:48:36 +0000 (0:00:00.140) 0:05:07.760 ******* 2026-03-18 04:48:43.677330 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:43.677340 | orchestrator | 2026-03-18 04:48:43.677351 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-03-18 04:48:43.677362 | orchestrator | Wednesday 18 March 2026 04:48:36 +0000 (0:00:00.250) 0:05:08.010 ******* 2026-03-18 04:48:43.677413 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:43.677435 | orchestrator | 2026-03-18 04:48:43.677446 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-03-18 04:48:43.677457 | orchestrator | Wednesday 18 March 2026 04:48:36 +0000 (0:00:00.151) 0:05:08.162 ******* 2026-03-18 04:48:43.677467 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-18 04:48:43.677478 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-18 
04:48:43.677489 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-18 04:48:43.677500 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:43.677510 | orchestrator | 2026-03-18 04:48:43.677521 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-03-18 04:48:43.677532 | orchestrator | Wednesday 18 March 2026 04:48:37 +0000 (0:00:01.014) 0:05:09.177 ******* 2026-03-18 04:48:43.677542 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-03-18 04:48:43.677553 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-03-18 04:48:43.677564 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-03-18 04:48:43.677574 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-03-18 04:48:43.677591 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-03-18 04:48:43.677602 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-03-18 04:48:43.677612 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:43.677623 | orchestrator | 2026-03-18 04:48:43.677634 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-03-18 04:48:43.677644 | orchestrator | Wednesday 18 March 2026 04:48:38 +0000 (0:00:00.666) 0:05:09.843 ******* 2026-03-18 04:48:43.677655 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1) 2026-03-18 04:48:43.677666 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-18 04:48:43.677681 | orchestrator | 2026-03-18 04:48:43.677699 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-03-18 04:48:43.677710 | orchestrator | Wednesday 18 March 2026 04:48:41 +0000 (0:00:03.466) 0:05:13.310 ******* 
2026-03-18 04:48:43.677721 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:48:43.677732 | orchestrator | 2026-03-18 04:48:43.677743 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-18 04:48:43.677753 | orchestrator | Wednesday 18 March 2026 04:48:43 +0000 (0:00:01.511) 0:05:14.821 ******* 2026-03-18 04:48:43.677764 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-03-18 04:48:43.677778 | orchestrator | 2026-03-18 04:48:43.677796 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-18 04:48:43.677807 | orchestrator | Wednesday 18 March 2026 04:48:43 +0000 (0:00:00.226) 0:05:15.047 ******* 2026-03-18 04:48:43.677818 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-03-18 04:48:43.677828 | orchestrator | 2026-03-18 04:48:43.677839 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-18 04:48:43.677859 | orchestrator | Wednesday 18 March 2026 04:48:43 +0000 (0:00:00.230) 0:05:15.277 ******* 2026-03-18 04:48:55.575717 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:55.575856 | orchestrator | 2026-03-18 04:48:55.575885 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-18 04:48:55.575908 | orchestrator | Wednesday 18 March 2026 04:48:44 +0000 (0:00:00.533) 0:05:15.811 ******* 2026-03-18 04:48:55.575928 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:55.575944 | orchestrator | 2026-03-18 04:48:55.575955 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-18 04:48:55.575966 | orchestrator | Wednesday 18 March 2026 04:48:44 +0000 (0:00:00.136) 0:05:15.947 ******* 2026-03-18 04:48:55.575977 | orchestrator | skipping: [testbed-node-1] 2026-03-18 
04:48:55.575988 | orchestrator | 2026-03-18 04:48:55.576027 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-18 04:48:55.576039 | orchestrator | Wednesday 18 March 2026 04:48:44 +0000 (0:00:00.128) 0:05:16.075 ******* 2026-03-18 04:48:55.576050 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:55.576061 | orchestrator | 2026-03-18 04:48:55.576072 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-18 04:48:55.576082 | orchestrator | Wednesday 18 March 2026 04:48:44 +0000 (0:00:00.130) 0:05:16.206 ******* 2026-03-18 04:48:55.576093 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:55.576104 | orchestrator | 2026-03-18 04:48:55.576114 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-18 04:48:55.576125 | orchestrator | Wednesday 18 March 2026 04:48:45 +0000 (0:00:00.607) 0:05:16.813 ******* 2026-03-18 04:48:55.576136 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:55.576147 | orchestrator | 2026-03-18 04:48:55.576158 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-18 04:48:55.576170 | orchestrator | Wednesday 18 March 2026 04:48:45 +0000 (0:00:00.416) 0:05:17.229 ******* 2026-03-18 04:48:55.576180 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:48:55.576191 | orchestrator | 2026-03-18 04:48:55.576202 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-18 04:48:55.576213 | orchestrator | Wednesday 18 March 2026 04:48:45 +0000 (0:00:00.146) 0:05:17.376 ******* 2026-03-18 04:48:55.576223 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:48:55.576234 | orchestrator | 2026-03-18 04:48:55.576246 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-18 04:48:55.576259 | orchestrator | Wednesday 18 March 2026 
04:48:46 +0000 (0:00:00.575) 0:05:17.951 *******
2026-03-18 04:48:55.576271 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:48:55.576283 | orchestrator |
2026-03-18 04:48:55.576296 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-18 04:48:55.576309 | orchestrator | Wednesday 18 March 2026 04:48:46 +0000 (0:00:00.538) 0:05:18.491 *******
2026-03-18 04:48:55.576321 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.576333 | orchestrator |
2026-03-18 04:48:55.576346 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-18 04:48:55.576358 | orchestrator | Wednesday 18 March 2026 04:48:47 +0000 (0:00:00.149) 0:05:18.640 *******
2026-03-18 04:48:55.576371 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:48:55.576415 | orchestrator |
2026-03-18 04:48:55.576428 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-18 04:48:55.576440 | orchestrator | Wednesday 18 March 2026 04:48:47 +0000 (0:00:00.160) 0:05:18.801 *******
2026-03-18 04:48:55.576453 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.576465 | orchestrator |
2026-03-18 04:48:55.576477 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-18 04:48:55.576489 | orchestrator | Wednesday 18 March 2026 04:48:47 +0000 (0:00:00.138) 0:05:18.940 *******
2026-03-18 04:48:55.576501 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.576513 | orchestrator |
2026-03-18 04:48:55.576525 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-18 04:48:55.576537 | orchestrator | Wednesday 18 March 2026 04:48:47 +0000 (0:00:00.138) 0:05:19.078 *******
2026-03-18 04:48:55.576550 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.576562 | orchestrator |
2026-03-18 04:48:55.576573 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-18 04:48:55.576599 | orchestrator | Wednesday 18 March 2026 04:48:47 +0000 (0:00:00.157) 0:05:19.236 *******
2026-03-18 04:48:55.576610 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.576621 | orchestrator |
2026-03-18 04:48:55.576632 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-18 04:48:55.576643 | orchestrator | Wednesday 18 March 2026 04:48:47 +0000 (0:00:00.154) 0:05:19.390 *******
2026-03-18 04:48:55.576653 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.576664 | orchestrator |
2026-03-18 04:48:55.576683 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-18 04:48:55.576694 | orchestrator | Wednesday 18 March 2026 04:48:47 +0000 (0:00:00.137) 0:05:19.527 *******
2026-03-18 04:48:55.576705 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:48:55.576716 | orchestrator |
2026-03-18 04:48:55.576726 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-18 04:48:55.576737 | orchestrator | Wednesday 18 March 2026 04:48:48 +0000 (0:00:00.166) 0:05:19.694 *******
2026-03-18 04:48:55.576748 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:48:55.576759 | orchestrator |
2026-03-18 04:48:55.576770 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-18 04:48:55.576780 | orchestrator | Wednesday 18 March 2026 04:48:48 +0000 (0:00:00.152) 0:05:19.846 *******
2026-03-18 04:48:55.576791 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:48:55.576802 | orchestrator |
2026-03-18 04:48:55.576813 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-18 04:48:55.576824 | orchestrator | Wednesday 18 March 2026 04:48:48 +0000 (0:00:00.504) 0:05:20.350 *******
2026-03-18 04:48:55.576835 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.576846 | orchestrator |
2026-03-18 04:48:55.576856 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-18 04:48:55.576867 | orchestrator | Wednesday 18 March 2026 04:48:48 +0000 (0:00:00.155) 0:05:20.505 *******
2026-03-18 04:48:55.576878 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.576893 | orchestrator |
2026-03-18 04:48:55.576913 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-18 04:48:55.576953 | orchestrator | Wednesday 18 March 2026 04:48:49 +0000 (0:00:00.124) 0:05:20.630 *******
2026-03-18 04:48:55.576971 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.576989 | orchestrator |
2026-03-18 04:48:55.577005 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-18 04:48:55.577023 | orchestrator | Wednesday 18 March 2026 04:48:49 +0000 (0:00:00.151) 0:05:20.782 *******
2026-03-18 04:48:55.577041 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.577058 | orchestrator |
2026-03-18 04:48:55.577078 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-18 04:48:55.577094 | orchestrator | Wednesday 18 March 2026 04:48:49 +0000 (0:00:00.143) 0:05:20.926 *******
2026-03-18 04:48:55.577104 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.577115 | orchestrator |
2026-03-18 04:48:55.577126 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-18 04:48:55.577137 | orchestrator | Wednesday 18 March 2026 04:48:49 +0000 (0:00:00.155) 0:05:21.081 *******
2026-03-18 04:48:55.577147 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.577158 | orchestrator |
2026-03-18 04:48:55.577169 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-18 04:48:55.577180 | orchestrator | Wednesday 18 March 2026 04:48:49 +0000 (0:00:00.141) 0:05:21.223 *******
2026-03-18 04:48:55.577190 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.577201 | orchestrator |
2026-03-18 04:48:55.577212 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-18 04:48:55.577224 | orchestrator | Wednesday 18 March 2026 04:48:49 +0000 (0:00:00.136) 0:05:21.359 *******
2026-03-18 04:48:55.577234 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.577245 | orchestrator |
2026-03-18 04:48:55.577256 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-18 04:48:55.577267 | orchestrator | Wednesday 18 March 2026 04:48:49 +0000 (0:00:00.138) 0:05:21.497 *******
2026-03-18 04:48:55.577278 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.577289 | orchestrator |
2026-03-18 04:48:55.577300 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-18 04:48:55.577310 | orchestrator | Wednesday 18 March 2026 04:48:50 +0000 (0:00:00.137) 0:05:21.635 *******
2026-03-18 04:48:55.577321 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.577341 | orchestrator |
2026-03-18 04:48:55.577352 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-18 04:48:55.577363 | orchestrator | Wednesday 18 March 2026 04:48:50 +0000 (0:00:00.124) 0:05:21.760 *******
2026-03-18 04:48:55.577373 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.577404 | orchestrator |
2026-03-18 04:48:55.577415 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-18 04:48:55.577426 | orchestrator | Wednesday 18 March 2026 04:48:50 +0000 (0:00:00.141) 0:05:21.901 *******
2026-03-18 04:48:55.577437 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.577447 | orchestrator |
2026-03-18 04:48:55.577458 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-18 04:48:55.577469 | orchestrator | Wednesday 18 March 2026 04:48:50 +0000 (0:00:00.489) 0:05:22.391 *******
2026-03-18 04:48:55.577479 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:48:55.577490 | orchestrator |
2026-03-18 04:48:55.577501 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-18 04:48:55.577512 | orchestrator | Wednesday 18 March 2026 04:48:51 +0000 (0:00:00.997) 0:05:23.389 *******
2026-03-18 04:48:55.577523 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:48:55.577533 | orchestrator |
2026-03-18 04:48:55.577544 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-18 04:48:55.577555 | orchestrator | Wednesday 18 March 2026 04:48:53 +0000 (0:00:01.463) 0:05:24.852 *******
2026-03-18 04:48:55.577566 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-03-18 04:48:55.577578 | orchestrator |
2026-03-18 04:48:55.577589 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-18 04:48:55.577606 | orchestrator | Wednesday 18 March 2026 04:48:53 +0000 (0:00:00.214) 0:05:25.067 *******
2026-03-18 04:48:55.577617 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.577628 | orchestrator |
2026-03-18 04:48:55.577639 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-18 04:48:55.577650 | orchestrator | Wednesday 18 March 2026 04:48:53 +0000 (0:00:00.124) 0:05:25.192 *******
2026-03-18 04:48:55.577660 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.577671 | orchestrator |
2026-03-18 04:48:55.577682 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-18 04:48:55.577692 | orchestrator | Wednesday 18 March 2026 04:48:53 +0000 (0:00:00.143) 0:05:25.336 *******
2026-03-18 04:48:55.577703 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-18 04:48:55.577714 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-18 04:48:55.577725 | orchestrator |
2026-03-18 04:48:55.577735 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-18 04:48:55.577746 | orchestrator | Wednesday 18 March 2026 04:48:54 +0000 (0:00:00.819) 0:05:26.155 *******
2026-03-18 04:48:55.577757 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:48:55.577768 | orchestrator |
2026-03-18 04:48:55.577778 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-18 04:48:55.577789 | orchestrator | Wednesday 18 March 2026 04:48:55 +0000 (0:00:00.569) 0:05:26.724 *******
2026-03-18 04:48:55.577800 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.577810 | orchestrator |
2026-03-18 04:48:55.577821 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-18 04:48:55.577832 | orchestrator | Wednesday 18 March 2026 04:48:55 +0000 (0:00:00.142) 0:05:26.876 *******
2026-03-18 04:48:55.577843 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:48:55.577853 | orchestrator |
2026-03-18 04:48:55.577864 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-18 04:48:55.577875 | orchestrator | Wednesday 18 March 2026 04:48:55 +0000 (0:00:00.142) 0:05:27.018 *******
2026-03-18 04:48:55.577894 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248204 | orchestrator |
2026-03-18 04:49:09.248289 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-18 04:49:09.248320 | orchestrator | Wednesday 18 March 2026 04:48:55 +0000 (0:00:00.163) 0:05:27.181 *******
2026-03-18 04:49:09.248328 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-03-18 04:49:09.248336 | orchestrator |
2026-03-18 04:49:09.248343 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-18 04:49:09.248348 | orchestrator | Wednesday 18 March 2026 04:48:56 +0000 (0:00:00.510) 0:05:27.692 *******
2026-03-18 04:49:09.248356 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:49:09.248361 | orchestrator |
2026-03-18 04:49:09.248365 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-18 04:49:09.248370 | orchestrator | Wednesday 18 March 2026 04:48:56 +0000 (0:00:00.691) 0:05:28.384 *******
2026-03-18 04:49:09.248374 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-18 04:49:09.248378 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-18 04:49:09.248382 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-18 04:49:09.248386 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248433 | orchestrator |
2026-03-18 04:49:09.248438 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-18 04:49:09.248442 | orchestrator | Wednesday 18 March 2026 04:48:56 +0000 (0:00:00.160) 0:05:28.544 *******
2026-03-18 04:49:09.248446 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248450 | orchestrator |
2026-03-18 04:49:09.248453 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-18 04:49:09.248457 | orchestrator | Wednesday 18 March 2026 04:48:57 +0000 (0:00:00.144) 0:05:28.688 *******
2026-03-18 04:49:09.248461 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248465 | orchestrator |
2026-03-18 04:49:09.248469 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-18 04:49:09.248473 | orchestrator | Wednesday 18 March 2026 04:48:57 +0000 (0:00:00.181) 0:05:28.870 *******
2026-03-18 04:49:09.248476 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248480 | orchestrator |
2026-03-18 04:49:09.248485 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-18 04:49:09.248488 | orchestrator | Wednesday 18 March 2026 04:48:57 +0000 (0:00:00.182) 0:05:29.052 *******
2026-03-18 04:49:09.248492 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248496 | orchestrator |
2026-03-18 04:49:09.248500 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-18 04:49:09.248504 | orchestrator | Wednesday 18 March 2026 04:48:57 +0000 (0:00:00.159) 0:05:29.211 *******
2026-03-18 04:49:09.248507 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248511 | orchestrator |
2026-03-18 04:49:09.248515 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-18 04:49:09.248519 | orchestrator | Wednesday 18 March 2026 04:48:57 +0000 (0:00:00.168) 0:05:29.379 *******
2026-03-18 04:49:09.248523 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:49:09.248526 | orchestrator |
2026-03-18 04:49:09.248530 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-18 04:49:09.248534 | orchestrator | Wednesday 18 March 2026 04:48:59 +0000 (0:00:01.506) 0:05:30.885 *******
2026-03-18 04:49:09.248538 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:49:09.248542 | orchestrator |
2026-03-18 04:49:09.248545 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-18 04:49:09.248549 | orchestrator | Wednesday 18 March 2026 04:48:59 +0000 (0:00:00.157) 0:05:31.043 *******
2026-03-18 04:49:09.248553 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-03-18 04:49:09.248557 | orchestrator |
2026-03-18 04:49:09.248560 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-18 04:49:09.248575 | orchestrator | Wednesday 18 March 2026 04:48:59 +0000 (0:00:00.223) 0:05:31.267 *******
2026-03-18 04:49:09.248584 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248588 | orchestrator |
2026-03-18 04:49:09.248591 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-18 04:49:09.248595 | orchestrator | Wednesday 18 March 2026 04:48:59 +0000 (0:00:00.133) 0:05:31.401 *******
2026-03-18 04:49:09.248599 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248603 | orchestrator |
2026-03-18 04:49:09.248606 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-18 04:49:09.248610 | orchestrator | Wednesday 18 March 2026 04:49:00 +0000 (0:00:00.462) 0:05:31.863 *******
2026-03-18 04:49:09.248614 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248618 | orchestrator |
2026-03-18 04:49:09.248622 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-18 04:49:09.248625 | orchestrator | Wednesday 18 March 2026 04:49:00 +0000 (0:00:00.158) 0:05:32.022 *******
2026-03-18 04:49:09.248629 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248633 | orchestrator |
2026-03-18 04:49:09.248637 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-18 04:49:09.248640 | orchestrator | Wednesday 18 March 2026 04:49:00 +0000 (0:00:00.151) 0:05:32.174 *******
2026-03-18 04:49:09.248644 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248648 | orchestrator |
2026-03-18 04:49:09.248652 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-18 04:49:09.248656 | orchestrator | Wednesday 18 March 2026 04:49:00 +0000 (0:00:00.155) 0:05:32.329 *******
2026-03-18 04:49:09.248659 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248663 | orchestrator |
2026-03-18 04:49:09.248667 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-18 04:49:09.248671 | orchestrator | Wednesday 18 March 2026 04:49:00 +0000 (0:00:00.150) 0:05:32.479 *******
2026-03-18 04:49:09.248675 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248678 | orchestrator |
2026-03-18 04:49:09.248682 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-18 04:49:09.248697 | orchestrator | Wednesday 18 March 2026 04:49:01 +0000 (0:00:00.142) 0:05:32.621 *******
2026-03-18 04:49:09.248701 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248704 | orchestrator |
2026-03-18 04:49:09.248708 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-18 04:49:09.248712 | orchestrator | Wednesday 18 March 2026 04:49:01 +0000 (0:00:00.160) 0:05:32.782 *******
2026-03-18 04:49:09.248716 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:49:09.248720 | orchestrator |
2026-03-18 04:49:09.248724 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-18 04:49:09.248727 | orchestrator | Wednesday 18 March 2026 04:49:01 +0000 (0:00:00.242) 0:05:33.024 *******
2026-03-18 04:49:09.248731 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-03-18 04:49:09.248735 | orchestrator |
2026-03-18 04:49:09.248739 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-18 04:49:09.248743 | orchestrator | Wednesday 18 March 2026 04:49:01 +0000 (0:00:00.218) 0:05:33.242 *******
2026-03-18 04:49:09.248747 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-03-18 04:49:09.248751 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-18 04:49:09.248755 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-18 04:49:09.248759 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-18 04:49:09.248763 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-18 04:49:09.248768 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-18 04:49:09.248772 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-18 04:49:09.248777 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-18 04:49:09.248782 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-18 04:49:09.248786 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-18 04:49:09.248794 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-18 04:49:09.248798 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-18 04:49:09.248802 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-18 04:49:09.248807 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-18 04:49:09.248811 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-03-18 04:49:09.248815 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-03-18 04:49:09.248820 | orchestrator |
2026-03-18 04:49:09.248824 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-18 04:49:09.248828 | orchestrator | Wednesday 18 March 2026 04:49:07 +0000 (0:00:05.780) 0:05:39.022 *******
2026-03-18 04:49:09.248833 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248839 | orchestrator |
2026-03-18 04:49:09.248845 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-18 04:49:09.248852 | orchestrator | Wednesday 18 March 2026 04:49:07 +0000 (0:00:00.129) 0:05:39.152 *******
2026-03-18 04:49:09.248858 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248863 | orchestrator |
2026-03-18 04:49:09.248870 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-18 04:49:09.248876 | orchestrator | Wednesday 18 March 2026 04:49:07 +0000 (0:00:00.411) 0:05:39.564 *******
2026-03-18 04:49:09.248882 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248889 | orchestrator |
2026-03-18 04:49:09.248895 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-18 04:49:09.248903 | orchestrator | Wednesday 18 March 2026 04:49:08 +0000 (0:00:00.161) 0:05:39.725 *******
2026-03-18 04:49:09.248908 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248912 | orchestrator |
2026-03-18 04:49:09.248916 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-18 04:49:09.248924 | orchestrator | Wednesday 18 March 2026 04:49:08 +0000 (0:00:00.140) 0:05:39.866 *******
2026-03-18 04:49:09.248929 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248933 | orchestrator |
2026-03-18 04:49:09.248937 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-18 04:49:09.248942 | orchestrator | Wednesday 18 March 2026 04:49:08 +0000 (0:00:00.140) 0:05:40.006 *******
2026-03-18 04:49:09.248946 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248950 | orchestrator |
2026-03-18 04:49:09.248955 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-18 04:49:09.248959 | orchestrator | Wednesday 18 March 2026 04:49:08 +0000 (0:00:00.148) 0:05:40.154 *******
2026-03-18 04:49:09.248964 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248968 | orchestrator |
2026-03-18 04:49:09.248972 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-18 04:49:09.248977 | orchestrator | Wednesday 18 March 2026 04:49:08 +0000 (0:00:00.140) 0:05:40.295 *******
2026-03-18 04:49:09.248981 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.248985 | orchestrator |
2026-03-18 04:49:09.248990 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-18 04:49:09.248994 | orchestrator | Wednesday 18 March 2026 04:49:08 +0000 (0:00:00.143) 0:05:40.438 *******
2026-03-18 04:49:09.248998 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.249003 | orchestrator |
2026-03-18 04:49:09.249007 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-18 04:49:09.249011 | orchestrator | Wednesday 18 March 2026 04:49:08 +0000 (0:00:00.138) 0:05:40.576 *******
2026-03-18 04:49:09.249016 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.249020 | orchestrator |
2026-03-18 04:49:09.249024 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-18 04:49:09.249029 | orchestrator | Wednesday 18 March 2026 04:49:09 +0000 (0:00:00.146) 0:05:40.722 *******
2026-03-18 04:49:09.249037 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:09.249041 | orchestrator |
2026-03-18 04:49:09.249049 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-18 04:49:27.425698 | orchestrator | Wednesday 18 March 2026 04:49:09 +0000 (0:00:00.128) 0:05:40.851 *******
2026-03-18 04:49:27.425843 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.425876 | orchestrator |
2026-03-18 04:49:27.425896 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-18 04:49:27.425917 | orchestrator | Wednesday 18 March 2026 04:49:09 +0000 (0:00:00.149) 0:05:41.000 *******
2026-03-18 04:49:27.425937 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.425955 | orchestrator |
2026-03-18 04:49:27.425973 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-18 04:49:27.425985 | orchestrator | Wednesday 18 March 2026 04:49:09 +0000 (0:00:00.242) 0:05:41.243 *******
2026-03-18 04:49:27.425996 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.426006 | orchestrator |
2026-03-18 04:49:27.426096 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-18 04:49:27.426119 | orchestrator | Wednesday 18 March 2026 04:49:09 +0000 (0:00:00.141) 0:05:41.384 *******
2026-03-18 04:49:27.426140 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.426160 | orchestrator |
2026-03-18 04:49:27.426175 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-18 04:49:27.426186 | orchestrator | Wednesday 18 March 2026 04:49:09 +0000 (0:00:00.232) 0:05:41.617 *******
2026-03-18 04:49:27.426197 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.426208 | orchestrator |
2026-03-18 04:49:27.426220 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-18 04:49:27.426233 | orchestrator | Wednesday 18 March 2026 04:49:10 +0000 (0:00:00.420) 0:05:42.037 *******
2026-03-18 04:49:27.426246 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.426258 | orchestrator |
2026-03-18 04:49:27.426272 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-18 04:49:27.426287 | orchestrator | Wednesday 18 March 2026 04:49:10 +0000 (0:00:00.140) 0:05:42.177 *******
2026-03-18 04:49:27.426299 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.426312 | orchestrator |
2026-03-18 04:49:27.426325 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-18 04:49:27.426337 | orchestrator | Wednesday 18 March 2026 04:49:10 +0000 (0:00:00.131) 0:05:42.309 *******
2026-03-18 04:49:27.426350 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.426362 | orchestrator |
2026-03-18 04:49:27.426375 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-18 04:49:27.426387 | orchestrator | Wednesday 18 March 2026 04:49:10 +0000 (0:00:00.145) 0:05:42.455 *******
2026-03-18 04:49:27.426399 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.426440 | orchestrator |
2026-03-18 04:49:27.426454 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-18 04:49:27.426466 | orchestrator | Wednesday 18 March 2026 04:49:10 +0000 (0:00:00.148) 0:05:42.603 *******
2026-03-18 04:49:27.426478 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.426490 | orchestrator |
2026-03-18 04:49:27.426503 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-18 04:49:27.426516 | orchestrator | Wednesday 18 March 2026 04:49:11 +0000 (0:00:00.150) 0:05:42.754 *******
2026-03-18 04:49:27.426529 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-18 04:49:27.426542 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-18 04:49:27.426554 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-18 04:49:27.426568 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.426581 | orchestrator |
2026-03-18 04:49:27.426592 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-18 04:49:27.426603 | orchestrator | Wednesday 18 March 2026 04:49:11 +0000 (0:00:00.462) 0:05:43.217 *******
2026-03-18 04:49:27.426639 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-18 04:49:27.426665 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-18 04:49:27.426676 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-18 04:49:27.426687 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.426698 | orchestrator |
2026-03-18 04:49:27.426709 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-18 04:49:27.426719 | orchestrator | Wednesday 18 March 2026 04:49:12 +0000 (0:00:00.417) 0:05:43.635 *******
2026-03-18 04:49:27.426730 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-18 04:49:27.426741 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-18 04:49:27.426752 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-18 04:49:27.426763 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.426773 | orchestrator |
2026-03-18 04:49:27.426784 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-18 04:49:27.426795 | orchestrator | Wednesday 18 March 2026 04:49:12 +0000 (0:00:00.411) 0:05:44.046 *******
2026-03-18 04:49:27.426806 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.426817 | orchestrator |
2026-03-18 04:49:27.426827 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-18 04:49:27.426838 | orchestrator | Wednesday 18 March 2026 04:49:12 +0000 (0:00:00.153) 0:05:44.199 *******
2026-03-18 04:49:27.426850 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-18 04:49:27.426861 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.426871 | orchestrator |
2026-03-18 04:49:27.426882 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-18 04:49:27.426893 | orchestrator | Wednesday 18 March 2026 04:49:12 +0000 (0:00:00.311) 0:05:44.511 *******
2026-03-18 04:49:27.426904 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:49:27.426915 | orchestrator |
2026-03-18 04:49:27.426926 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-18 04:49:27.426937 | orchestrator | Wednesday 18 March 2026 04:49:13 +0000 (0:00:01.086) 0:05:45.598 *******
2026-03-18 04:49:27.426948 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:49:27.426958 | orchestrator |
2026-03-18 04:49:27.426976 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-18 04:49:27.427019 | orchestrator | Wednesday 18 March 2026 04:49:14 +0000 (0:00:00.168) 0:05:45.767 *******
2026-03-18 04:49:27.427032 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1
2026-03-18 04:49:27.427044 | orchestrator |
2026-03-18 04:49:27.427055 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-18 04:49:27.427066 | orchestrator | Wednesday 18 March 2026 04:49:14 +0000 (0:00:00.266) 0:05:46.033 *******
2026-03-18 04:49:27.427076 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-03-18 04:49:27.427087 | orchestrator |
2026-03-18 04:49:27.427098 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-18 04:49:27.427109 | orchestrator | Wednesday 18 March 2026 04:49:16 +0000 (0:00:02.153) 0:05:48.187 *******
2026-03-18 04:49:27.427120 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.427130 | orchestrator |
2026-03-18 04:49:27.427141 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-18 04:49:27.427152 | orchestrator | Wednesday 18 March 2026 04:49:16 +0000 (0:00:00.192) 0:05:48.379 *******
2026-03-18 04:49:27.427163 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:49:27.427173 | orchestrator |
2026-03-18 04:49:27.427184 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-18 04:49:27.427195 | orchestrator | Wednesday 18 March 2026 04:49:16 +0000 (0:00:00.164) 0:05:48.544 *******
2026-03-18 04:49:27.427206 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:49:27.427217 | orchestrator |
2026-03-18 04:49:27.427227 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-18 04:49:27.427248 | orchestrator | Wednesday 18 March 2026 04:49:17 +0000 (0:00:00.177) 0:05:48.721 *******
2026-03-18 04:49:27.427259 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:49:27.427270 | orchestrator |
2026-03-18 04:49:27.427280 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-18 04:49:27.427291 | orchestrator | Wednesday 18 March 2026 04:49:18 +0000 (0:00:01.043) 0:05:49.765 *******
2026-03-18 04:49:27.427302 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:49:27.427313 | orchestrator |
2026-03-18 04:49:27.427323 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-18 04:49:27.427334 | orchestrator | Wednesday 18 March 2026 04:49:18 +0000 (0:00:00.649) 0:05:50.414 *******
2026-03-18 04:49:27.427345 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:49:27.427356 | orchestrator |
2026-03-18 04:49:27.427366 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-18 04:49:27.427377 | orchestrator | Wednesday 18 March 2026 04:49:19 +0000 (0:00:00.471) 0:05:50.886 *******
2026-03-18 04:49:27.427388 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:49:27.427399 | orchestrator |
2026-03-18 04:49:27.427434 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-18 04:49:27.427447 | orchestrator | Wednesday 18 March 2026 04:49:19 +0000 (0:00:00.483) 0:05:51.370 *******
2026-03-18 04:49:27.427457 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-03-18 04:49:27.427468 | orchestrator |
2026-03-18 04:49:27.427479 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-18 04:49:27.427490 | orchestrator | Wednesday 18 March 2026 04:49:20 +0000 (0:00:00.585) 0:05:51.955 *******
2026-03-18 04:49:27.427500 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-03-18 04:49:27.427511 | orchestrator |
2026-03-18 04:49:27.427522 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-18 04:49:27.427533 | orchestrator | Wednesday 18 March 2026 04:49:21 +0000 (0:00:01.256) 0:05:53.211 *******
2026-03-18 04:49:27.427543 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 04:49:27.427554 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-03-18 04:49:27.427565 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-18 04:49:27.427576 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-03-18 04:49:27.427587 | orchestrator |
2026-03-18 04:49:27.427604 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-18 04:49:27.427615 | orchestrator | Wednesday 18 March 2026 04:49:24 +0000 (0:00:02.798) 0:05:56.010 *******
2026-03-18 04:49:27.427625 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:49:27.427636 | orchestrator |
2026-03-18 04:49:27.427647 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-03-18 04:49:27.427658 | orchestrator | Wednesday 18 March 2026 04:49:25 +0000 (0:00:01.098) 0:05:57.109 *******
2026-03-18 04:49:27.427669 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:49:27.427679 | orchestrator |
2026-03-18 04:49:27.427690 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-18 04:49:27.427701 | orchestrator | Wednesday 18 March 2026 04:49:25 +0000 (0:00:00.150) 0:05:57.259 *******
2026-03-18 04:49:27.427711 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:49:27.427722 | orchestrator |
2026-03-18 04:49:27.427733 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-18 04:49:27.427744 | orchestrator | Wednesday 18 March 2026 04:49:25 +0000 (0:00:00.168) 0:05:57.428 *******
2026-03-18 04:49:27.427754 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:49:27.427765 | orchestrator |
2026-03-18 04:49:27.427776 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-18 04:49:27.427786 | orchestrator | Wednesday 18 March 2026 04:49:26 +0000 (0:00:00.727) 0:05:58.155 *******
2026-03-18 04:49:27.427797 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:49:27.427808 | orchestrator |
2026-03-18 04:49:27.427818 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-18 04:49:27.427836 | orchestrator | Wednesday 18 March 2026 04:49:27 +0000 (0:00:00.496) 0:05:58.651 *******
2026-03-18 04:49:27.427846 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:49:27.427857 | orchestrator |
2026-03-18 04:49:27.427868 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-18 04:49:27.427878 | orchestrator | Wednesday 18 March 2026 04:49:27 +0000 (0:00:00.146) 0:05:58.797 *******
2026-03-18 04:49:27.427889 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1
2026-03-18
04:49:27.427900 | orchestrator | 2026-03-18 04:49:27.427918 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-18 04:50:14.586758 | orchestrator | Wednesday 18 March 2026 04:49:27 +0000 (0:00:00.232) 0:05:59.030 ******* 2026-03-18 04:50:14.586845 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:50:14.586855 | orchestrator | 2026-03-18 04:50:14.586862 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-18 04:50:14.586868 | orchestrator | Wednesday 18 March 2026 04:49:27 +0000 (0:00:00.132) 0:05:59.162 ******* 2026-03-18 04:50:14.586874 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:50:14.586879 | orchestrator | 2026-03-18 04:50:14.586885 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-18 04:50:14.586891 | orchestrator | Wednesday 18 March 2026 04:49:27 +0000 (0:00:00.147) 0:05:59.310 ******* 2026-03-18 04:50:14.586896 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-03-18 04:50:14.586902 | orchestrator | 2026-03-18 04:50:14.586907 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-18 04:50:14.586913 | orchestrator | Wednesday 18 March 2026 04:49:28 +0000 (0:00:00.488) 0:05:59.798 ******* 2026-03-18 04:50:14.586918 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:50:14.586924 | orchestrator | 2026-03-18 04:50:14.586930 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-18 04:50:14.586935 | orchestrator | Wednesday 18 March 2026 04:49:29 +0000 (0:00:01.351) 0:06:01.150 ******* 2026-03-18 04:50:14.586940 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:50:14.586946 | orchestrator | 2026-03-18 04:50:14.586951 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-18 
04:50:14.586957 | orchestrator | Wednesday 18 March 2026 04:49:30 +0000 (0:00:01.007) 0:06:02.157 ******* 2026-03-18 04:50:14.586962 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:50:14.586967 | orchestrator | 2026-03-18 04:50:14.586973 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-18 04:50:14.586979 | orchestrator | Wednesday 18 March 2026 04:49:31 +0000 (0:00:01.389) 0:06:03.547 ******* 2026-03-18 04:50:14.586984 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:50:14.586990 | orchestrator | 2026-03-18 04:50:14.586995 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-18 04:50:14.587000 | orchestrator | Wednesday 18 March 2026 04:49:34 +0000 (0:00:02.076) 0:06:05.623 ******* 2026-03-18 04:50:14.587006 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-03-18 04:50:14.587012 | orchestrator | 2026-03-18 04:50:14.587017 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-18 04:50:14.587023 | orchestrator | Wednesday 18 March 2026 04:49:34 +0000 (0:00:00.223) 0:06:05.847 ******* 2026-03-18 04:50:14.587028 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-18 04:50:14.587034 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:50:14.587039 | orchestrator | 2026-03-18 04:50:14.587044 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-18 04:50:14.587050 | orchestrator | Wednesday 18 March 2026 04:49:56 +0000 (0:00:21.924) 0:06:27.771 ******* 2026-03-18 04:50:14.587055 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:50:14.587060 | orchestrator | 2026-03-18 04:50:14.587066 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-18 04:50:14.587089 | orchestrator | Wednesday 18 March 2026 04:49:58 +0000 (0:00:02.018) 0:06:29.790 ******* 2026-03-18 04:50:14.587095 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:50:14.587100 | orchestrator | 2026-03-18 04:50:14.587106 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-18 04:50:14.587111 | orchestrator | Wednesday 18 March 2026 04:49:58 +0000 (0:00:00.122) 0:06:29.912 ******* 2026-03-18 04:50:14.587129 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-18 04:50:14.587136 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-18 04:50:14.587142 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-18 04:50:14.587148 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-18 04:50:14.587167 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-18 04:50:14.587174 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}])  2026-03-18 04:50:14.587181 | orchestrator | 2026-03-18 04:50:14.587186 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-18 04:50:14.587192 | orchestrator | Wednesday 18 March 2026 04:50:07 +0000 (0:00:08.773) 0:06:38.685 ******* 2026-03-18 04:50:14.587198 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:50:14.587203 | orchestrator | 
2026-03-18 04:50:14.587208 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-18 04:50:14.587214 | orchestrator | Wednesday 18 March 2026 04:50:08 +0000 (0:00:01.394) 0:06:40.080 ******* 2026-03-18 04:50:14.587219 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:50:14.587225 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-03-18 04:50:14.587230 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-03-18 04:50:14.587235 | orchestrator | 2026-03-18 04:50:14.587241 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-18 04:50:14.587251 | orchestrator | Wednesday 18 March 2026 04:50:09 +0000 (0:00:01.212) 0:06:41.292 ******* 2026-03-18 04:50:14.587267 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-18 04:50:14.587277 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-18 04:50:14.587298 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-18 04:50:14.587309 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:50:14.587315 | orchestrator | 2026-03-18 04:50:14.587323 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-18 04:50:14.587332 | orchestrator | Wednesday 18 March 2026 04:50:10 +0000 (0:00:00.443) 0:06:41.735 ******* 2026-03-18 04:50:14.587341 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:50:14.587349 | orchestrator | 2026-03-18 04:50:14.587356 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-18 04:50:14.587365 | orchestrator | Wednesday 18 March 2026 04:50:10 +0000 (0:00:00.123) 0:06:41.859 ******* 2026-03-18 04:50:14.587373 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:50:14.587382 | orchestrator | 2026-03-18 04:50:14.587391 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-18 04:50:14.587399 | orchestrator | 2026-03-18 04:50:14.587408 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-18 04:50:14.587416 | orchestrator | Wednesday 18 March 2026 04:50:11 +0000 (0:00:01.687) 0:06:43.547 ******* 2026-03-18 04:50:14.587425 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:50:14.587435 | orchestrator | 2026-03-18 04:50:14.587496 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-18 04:50:14.587508 | orchestrator | Wednesday 18 March 2026 04:50:12 +0000 (0:00:00.445) 0:06:43.993 ******* 2026-03-18 04:50:14.587517 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:50:14.587527 | orchestrator | 2026-03-18 04:50:14.587542 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-18 04:50:14.587552 | orchestrator | Wednesday 18 March 2026 04:50:12 +0000 (0:00:00.140) 0:06:44.133 ******* 2026-03-18 04:50:14.587560 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:14.587570 | orchestrator | 2026-03-18 04:50:14.587576 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-03-18 04:50:14.587586 | orchestrator | Wednesday 18 March 2026 04:50:12 +0000 (0:00:00.123) 0:06:44.257 ******* 2026-03-18 04:50:14.587595 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:50:14.587604 | orchestrator | 2026-03-18 04:50:14.587613 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 04:50:14.587622 | orchestrator | Wednesday 18 March 
2026 04:50:12 +0000 (0:00:00.160) 0:06:44.418 ******* 2026-03-18 04:50:14.587632 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-03-18 04:50:14.587642 | orchestrator | 2026-03-18 04:50:14.587651 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-18 04:50:14.587660 | orchestrator | Wednesday 18 March 2026 04:50:13 +0000 (0:00:00.271) 0:06:44.689 ******* 2026-03-18 04:50:14.587669 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:50:14.587676 | orchestrator | 2026-03-18 04:50:14.587685 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-18 04:50:14.587695 | orchestrator | Wednesday 18 March 2026 04:50:13 +0000 (0:00:00.440) 0:06:45.130 ******* 2026-03-18 04:50:14.587704 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:50:14.587713 | orchestrator | 2026-03-18 04:50:14.587722 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 04:50:14.587730 | orchestrator | Wednesday 18 March 2026 04:50:13 +0000 (0:00:00.397) 0:06:45.528 ******* 2026-03-18 04:50:14.587740 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:50:14.587749 | orchestrator | 2026-03-18 04:50:14.587758 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 04:50:14.587767 | orchestrator | Wednesday 18 March 2026 04:50:14 +0000 (0:00:00.495) 0:06:46.023 ******* 2026-03-18 04:50:14.587776 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:50:14.587786 | orchestrator | 2026-03-18 04:50:14.587795 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-18 04:50:14.587819 | orchestrator | Wednesday 18 March 2026 04:50:14 +0000 (0:00:00.164) 0:06:46.188 ******* 2026-03-18 04:50:22.889021 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:50:22.889155 | orchestrator | 2026-03-18 04:50:22.889179 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-18 04:50:22.889202 | orchestrator | Wednesday 18 March 2026 04:50:14 +0000 (0:00:00.150) 0:06:46.339 ******* 2026-03-18 04:50:22.889220 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:50:22.889236 | orchestrator | 2026-03-18 04:50:22.889254 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-18 04:50:22.889275 | orchestrator | Wednesday 18 March 2026 04:50:14 +0000 (0:00:00.177) 0:06:46.517 ******* 2026-03-18 04:50:22.889292 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:22.889311 | orchestrator | 2026-03-18 04:50:22.889331 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-18 04:50:22.889348 | orchestrator | Wednesday 18 March 2026 04:50:15 +0000 (0:00:00.177) 0:06:46.694 ******* 2026-03-18 04:50:22.889367 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:50:22.889386 | orchestrator | 2026-03-18 04:50:22.889404 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-18 04:50:22.889423 | orchestrator | Wednesday 18 March 2026 04:50:15 +0000 (0:00:00.135) 0:06:46.830 ******* 2026-03-18 04:50:22.889440 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:50:22.889496 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:50:22.889510 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-18 04:50:22.889522 | orchestrator | 2026-03-18 04:50:22.889533 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-18 04:50:22.889545 | orchestrator | Wednesday 18 March 2026 04:50:15 +0000 (0:00:00.685) 0:06:47.516 ******* 2026-03-18 04:50:22.889556 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:50:22.889567 | 
orchestrator | 2026-03-18 04:50:22.889578 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-18 04:50:22.889589 | orchestrator | Wednesday 18 March 2026 04:50:16 +0000 (0:00:00.250) 0:06:47.767 ******* 2026-03-18 04:50:22.889600 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:50:22.889611 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:50:22.889622 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-18 04:50:22.889633 | orchestrator | 2026-03-18 04:50:22.889643 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-18 04:50:22.889654 | orchestrator | Wednesday 18 March 2026 04:50:18 +0000 (0:00:02.219) 0:06:49.986 ******* 2026-03-18 04:50:22.889665 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-18 04:50:22.889676 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-18 04:50:22.889687 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-18 04:50:22.889698 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:22.889709 | orchestrator | 2026-03-18 04:50:22.889720 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-18 04:50:22.889730 | orchestrator | Wednesday 18 March 2026 04:50:18 +0000 (0:00:00.426) 0:06:50.412 ******* 2026-03-18 04:50:22.889744 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-18 04:50:22.889776 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-18 04:50:22.889788 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-18 04:50:22.889825 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:22.889837 | orchestrator | 2026-03-18 04:50:22.889848 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-18 04:50:22.889859 | orchestrator | Wednesday 18 March 2026 04:50:19 +0000 (0:00:00.956) 0:06:51.369 ******* 2026-03-18 04:50:22.889872 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:50:22.889886 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:50:22.889919 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:50:22.889932 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:22.889943 | orchestrator | 2026-03-18 04:50:22.889954 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-18 04:50:22.889970 | orchestrator | Wednesday 18 March 2026 04:50:19 +0000 (0:00:00.185) 0:06:51.555 ******* 2026-03-18 04:50:22.889992 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'f231ed715636', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 04:50:16.693675', 'end': '2026-03-18 04:50:16.738028', 'delta': '0:00:00.044353', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f231ed715636'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-18 04:50:22.890097 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'c6b616adb9bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 04:50:17.234419', 'end': '2026-03-18 04:50:17.285597', 'delta': '0:00:00.051178', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c6b616adb9bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-18 04:50:22.890139 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'fc8e238828f1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 04:50:18.177145', 'end': '2026-03-18 04:50:18.219630', 'delta': '0:00:00.042485', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['fc8e238828f1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-18 04:50:22.890187 | orchestrator | 2026-03-18 04:50:22.890199 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-18 04:50:22.890210 | orchestrator | Wednesday 18 March 2026 04:50:20 +0000 (0:00:00.241) 0:06:51.797 ******* 2026-03-18 04:50:22.890221 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:50:22.890232 | orchestrator | 2026-03-18 04:50:22.890243 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-18 04:50:22.890371 | orchestrator | Wednesday 18 March 2026 04:50:21 +0000 (0:00:00.956) 0:06:52.754 ******* 2026-03-18 04:50:22.890391 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:22.890407 | orchestrator | 2026-03-18 04:50:22.890424 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-18 04:50:22.890441 | orchestrator | Wednesday 18 March 2026 04:50:21 +0000 (0:00:00.263) 0:06:53.017 ******* 2026-03-18 04:50:22.890487 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:50:22.890508 | orchestrator | 2026-03-18 04:50:22.890528 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-03-18 04:50:22.890547 | orchestrator | Wednesday 18 March 2026 04:50:21 +0000 (0:00:00.155) 0:06:53.173 ******* 2026-03-18 04:50:22.890568 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-18 04:50:22.890587 | orchestrator | 2026-03-18 04:50:22.890607 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 04:50:22.890626 | orchestrator | Wednesday 18 March 2026 04:50:22 +0000 (0:00:01.035) 0:06:54.209 ******* 2026-03-18 04:50:22.890645 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:50:22.890662 | orchestrator | 2026-03-18 04:50:22.890682 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-18 04:50:22.890703 | orchestrator | Wednesday 18 March 2026 04:50:22 +0000 (0:00:00.152) 0:06:54.361 ******* 2026-03-18 04:50:22.890723 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:22.890742 | orchestrator | 2026-03-18 04:50:22.890761 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-18 04:50:22.890800 | orchestrator | Wednesday 18 March 2026 04:50:22 +0000 (0:00:00.138) 0:06:54.499 ******* 2026-03-18 04:50:24.929006 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:24.929117 | orchestrator | 2026-03-18 04:50:24.929135 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 04:50:24.929148 | orchestrator | Wednesday 18 March 2026 04:50:23 +0000 (0:00:00.256) 0:06:54.756 ******* 2026-03-18 04:50:24.929160 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:24.929171 | orchestrator | 2026-03-18 04:50:24.929182 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-18 04:50:24.929194 | orchestrator | Wednesday 18 March 2026 04:50:23 +0000 (0:00:00.119) 0:06:54.875 ******* 
2026-03-18 04:50:24.929204 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:24.929215 | orchestrator | 2026-03-18 04:50:24.929226 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-18 04:50:24.929237 | orchestrator | Wednesday 18 March 2026 04:50:23 +0000 (0:00:00.147) 0:06:55.023 ******* 2026-03-18 04:50:24.929248 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:24.929259 | orchestrator | 2026-03-18 04:50:24.929270 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-18 04:50:24.929281 | orchestrator | Wednesday 18 March 2026 04:50:23 +0000 (0:00:00.141) 0:06:55.164 ******* 2026-03-18 04:50:24.929292 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:24.929302 | orchestrator | 2026-03-18 04:50:24.929313 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-18 04:50:24.929347 | orchestrator | Wednesday 18 March 2026 04:50:23 +0000 (0:00:00.138) 0:06:55.303 ******* 2026-03-18 04:50:24.929359 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:24.929370 | orchestrator | 2026-03-18 04:50:24.929381 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-18 04:50:24.929393 | orchestrator | Wednesday 18 March 2026 04:50:23 +0000 (0:00:00.157) 0:06:55.461 ******* 2026-03-18 04:50:24.929404 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:24.929414 | orchestrator | 2026-03-18 04:50:24.929425 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-18 04:50:24.929437 | orchestrator | Wednesday 18 March 2026 04:50:23 +0000 (0:00:00.153) 0:06:55.614 ******* 2026-03-18 04:50:24.929447 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:24.929484 | orchestrator | 2026-03-18 04:50:24.929496 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-03-18 04:50:24.929507 | orchestrator | Wednesday 18 March 2026 04:50:24 +0000 (0:00:00.427) 0:06:56.042 ******* 2026-03-18 04:50:24.929520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:50:24.929534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:50:24.929564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:50:24.929579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 04:50:24.929595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:50:24.929625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:50:24.929639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:50:24.929670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bbfcb729', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-18 04:50:24.929687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:50:24.929700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:50:24.929713 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:24.929726 | orchestrator | 2026-03-18 04:50:24.929738 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-18 04:50:24.929751 | orchestrator | Wednesday 18 March 2026 04:50:24 +0000 (0:00:00.263) 0:06:56.305 ******* 2026-03-18 04:50:24.929773 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:50:26.059854 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:50:26.059933 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:50:26.059946 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:50:26.059970 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:50:26.059976 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:50:26.059982 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:50:26.060016 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bbfcb729', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:50:26.060027 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:50:26.060033 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:50:26.060039 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:26.060046 | orchestrator | 2026-03-18 04:50:26.060052 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-18 04:50:26.060059 | 
orchestrator | Wednesday 18 March 2026 04:50:24 +0000 (0:00:00.236) 0:06:56.542 *******
2026-03-18 04:50:26.060069 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:50:26.060075 | orchestrator |
2026-03-18 04:50:26.060080 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-18 04:50:26.060085 | orchestrator | Wednesday 18 March 2026 04:50:25 +0000 (0:00:00.513) 0:06:57.055 *******
2026-03-18 04:50:26.060090 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:50:26.060095 | orchestrator |
2026-03-18 04:50:26.060100 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-18 04:50:26.060105 | orchestrator | Wednesday 18 March 2026 04:50:25 +0000 (0:00:00.139) 0:06:57.194 *******
2026-03-18 04:50:26.060110 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:50:26.060115 | orchestrator |
2026-03-18 04:50:26.060120 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-18 04:50:26.060129 | orchestrator | Wednesday 18 March 2026 04:50:26 +0000 (0:00:00.473) 0:06:57.668 *******
2026-03-18 04:50:41.921415 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.921563 | orchestrator |
2026-03-18 04:50:41.921582 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-18 04:50:41.921597 | orchestrator | Wednesday 18 March 2026 04:50:26 +0000 (0:00:00.153) 0:06:57.822 *******
2026-03-18 04:50:41.921608 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.921619 | orchestrator |
2026-03-18 04:50:41.921630 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-18 04:50:41.921641 | orchestrator | Wednesday 18 March 2026 04:50:26 +0000 (0:00:00.253) 0:06:58.075 *******
2026-03-18 04:50:41.921652 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.921663 | orchestrator |
2026-03-18 04:50:41.921674 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-18 04:50:41.921685 | orchestrator | Wednesday 18 March 2026 04:50:26 +0000 (0:00:00.145) 0:06:58.221 *******
2026-03-18 04:50:41.921696 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-18 04:50:41.921707 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-18 04:50:41.921718 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-18 04:50:41.921729 | orchestrator |
2026-03-18 04:50:41.921739 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-18 04:50:41.921750 | orchestrator | Wednesday 18 March 2026 04:50:27 +0000 (0:00:00.995) 0:06:59.216 *******
2026-03-18 04:50:41.921761 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-18 04:50:41.921792 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-18 04:50:41.921833 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-18 04:50:41.921854 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.921872 | orchestrator |
2026-03-18 04:50:41.921890 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-18 04:50:41.921907 | orchestrator | Wednesday 18 March 2026 04:50:27 +0000 (0:00:00.182) 0:06:59.399 *******
2026-03-18 04:50:41.921925 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.921942 | orchestrator |
2026-03-18 04:50:41.921960 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-18 04:50:41.921978 | orchestrator | Wednesday 18 March 2026 04:50:27 +0000 (0:00:00.140) 0:06:59.540 *******
2026-03-18 04:50:41.921997 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 04:50:41.922018 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 04:50:41.922112 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-18 04:50:41.922125 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-18 04:50:41.922138 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-18 04:50:41.922151 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-18 04:50:41.922181 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 04:50:41.922218 | orchestrator |
2026-03-18 04:50:41.922231 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-18 04:50:41.922243 | orchestrator | Wednesday 18 March 2026 04:50:29 +0000 (0:00:01.222) 0:07:00.763 *******
2026-03-18 04:50:41.922255 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 04:50:41.922268 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 04:50:41.922281 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-18 04:50:41.922293 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-18 04:50:41.922305 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-18 04:50:41.922317 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-18 04:50:41.922327 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 04:50:41.922338 | orchestrator |
2026-03-18 04:50:41.922349 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-03-18 04:50:41.922360 | orchestrator | Wednesday 18 March 2026 04:50:31 +0000 (0:00:02.230) 0:07:02.993 *******
2026-03-18 04:50:41.922370 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.922381 | orchestrator |
2026-03-18 04:50:41.922391 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-03-18 04:50:41.922402 | orchestrator | Wednesday 18 March 2026 04:50:31 +0000 (0:00:00.245) 0:07:03.239 *******
2026-03-18 04:50:41.922412 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.922423 | orchestrator |
2026-03-18 04:50:41.922434 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-03-18 04:50:41.922444 | orchestrator | Wednesday 18 March 2026 04:50:31 +0000 (0:00:00.233) 0:07:03.472 *******
2026-03-18 04:50:41.922455 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.922608 | orchestrator |
2026-03-18 04:50:41.922661 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-03-18 04:50:41.922689 | orchestrator | Wednesday 18 March 2026 04:50:32 +0000 (0:00:00.164) 0:07:03.636 *******
2026-03-18 04:50:41.922710 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.922728 | orchestrator |
2026-03-18 04:50:41.922743 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-03-18 04:50:41.922759 | orchestrator | Wednesday 18 March 2026 04:50:32 +0000 (0:00:00.254) 0:07:03.890 *******
2026-03-18 04:50:41.922776 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.922793 | orchestrator |
2026-03-18 04:50:41.922810 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-03-18 04:50:41.922826 | orchestrator | Wednesday 18 March 2026 04:50:32 +0000 (0:00:00.142) 0:07:04.033 *******
2026-03-18 04:50:41.922872 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-18 04:50:41.922892 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-18 04:50:41.922909 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-18 04:50:41.922927 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.922943 | orchestrator |
2026-03-18 04:50:41.922954 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-03-18 04:50:41.922964 | orchestrator | Wednesday 18 March 2026 04:50:32 +0000 (0:00:00.431) 0:07:04.464 *******
2026-03-18 04:50:41.922973 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-03-18 04:50:41.922983 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-03-18 04:50:41.922992 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-03-18 04:50:41.923002 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-03-18 04:50:41.923011 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-03-18 04:50:41.923034 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-03-18 04:50:41.923044 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.923053 | orchestrator |
2026-03-18 04:50:41.923063 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-03-18 04:50:41.923073 | orchestrator | Wednesday 18 March 2026 04:50:33 +0000 (0:00:01.110) 0:07:05.575 *******
2026-03-18 04:50:41.923083 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2)
2026-03-18 04:50:41.923093 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-18 04:50:41.923102 | orchestrator |
2026-03-18 04:50:41.923112 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-03-18 04:50:41.923122 | orchestrator | Wednesday 18 March 2026 04:50:36 +0000 (0:00:02.495) 0:07:08.070 *******
2026-03-18 04:50:41.923131 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:50:41.923141 | orchestrator |
2026-03-18 04:50:41.923150 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-18 04:50:41.923160 | orchestrator | Wednesday 18 March 2026 04:50:37 +0000 (0:00:01.457) 0:07:09.528 *******
2026-03-18 04:50:41.923169 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-03-18 04:50:41.923180 | orchestrator |
2026-03-18 04:50:41.923190 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-18 04:50:41.923199 | orchestrator | Wednesday 18 March 2026 04:50:38 +0000 (0:00:00.205) 0:07:09.734 *******
2026-03-18 04:50:41.923209 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-03-18 04:50:41.923218 | orchestrator |
2026-03-18 04:50:41.923228 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-18 04:50:41.923237 | orchestrator | Wednesday 18 March 2026 04:50:38 +0000 (0:00:00.504) 0:07:10.238 *******
2026-03-18 04:50:41.923255 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:50:41.923265 | orchestrator |
2026-03-18 04:50:41.923275 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-18 04:50:41.923284 | orchestrator | Wednesday 18 March 2026 04:50:39 +0000 (0:00:00.536) 0:07:10.774 *******
2026-03-18 04:50:41.923294 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.923304 | orchestrator |
2026-03-18 04:50:41.923313 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-18 04:50:41.923323 | orchestrator | Wednesday 18 March 2026 04:50:39 +0000 (0:00:00.141) 0:07:10.915 *******
2026-03-18 04:50:41.923332 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.923342 | orchestrator |
2026-03-18 04:50:41.923351 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-18 04:50:41.923361 | orchestrator | Wednesday 18 March 2026 04:50:39 +0000 (0:00:00.148) 0:07:11.064 *******
2026-03-18 04:50:41.923370 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.923380 | orchestrator |
2026-03-18 04:50:41.923389 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-18 04:50:41.923399 | orchestrator | Wednesday 18 March 2026 04:50:39 +0000 (0:00:00.168) 0:07:11.233 *******
2026-03-18 04:50:41.923409 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:50:41.923418 | orchestrator |
2026-03-18 04:50:41.923428 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-18 04:50:41.923437 | orchestrator | Wednesday 18 March 2026 04:50:40 +0000 (0:00:00.604) 0:07:11.837 *******
2026-03-18 04:50:41.923447 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.923457 | orchestrator |
2026-03-18 04:50:41.923493 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-18 04:50:41.923504 | orchestrator | Wednesday 18 March 2026 04:50:40 +0000 (0:00:00.160) 0:07:11.998 *******
2026-03-18 04:50:41.923513 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.923523 | orchestrator |
2026-03-18 04:50:41.923532 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-18 04:50:41.923542 | orchestrator | Wednesday 18 March 2026 04:50:40 +0000 (0:00:00.128) 0:07:12.127 *******
2026-03-18 04:50:41.923558 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:50:41.923567 | orchestrator |
2026-03-18 04:50:41.923577 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-18 04:50:41.923586 | orchestrator | Wednesday 18 March 2026 04:50:41 +0000 (0:00:00.551) 0:07:12.678 *******
2026-03-18 04:50:41.923595 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:50:41.923605 | orchestrator |
2026-03-18 04:50:41.923614 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-18 04:50:41.923624 | orchestrator | Wednesday 18 March 2026 04:50:41 +0000 (0:00:00.136) 0:07:13.246 *******
2026-03-18 04:50:41.923633 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:41.923643 | orchestrator |
2026-03-18 04:50:41.923652 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-18 04:50:41.923662 | orchestrator | Wednesday 18 March 2026 04:50:41 +0000 (0:00:00.144) 0:07:13.383 *******
2026-03-18 04:50:41.923679 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:50:52.968187 | orchestrator |
2026-03-18 04:50:52.968302 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-18 04:50:52.968320 | orchestrator | Wednesday 18 March 2026 04:50:41 +0000 (0:00:00.144) 0:07:13.528 *******
2026-03-18 04:50:52.968333 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.968345 | orchestrator |
2026-03-18 04:50:52.968357 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-18 04:50:52.968368 | orchestrator | Wednesday 18 March 2026 04:50:42 +0000 (0:00:00.435) 0:07:13.963 *******
2026-03-18 04:50:52.968379 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.968389 | orchestrator |
2026-03-18 04:50:52.968400 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-18 04:50:52.968411 | orchestrator | Wednesday 18 March 2026 04:50:42 +0000 (0:00:00.137) 0:07:14.101 *******
2026-03-18 04:50:52.968422 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.968433 | orchestrator |
2026-03-18 04:50:52.968444 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-18 04:50:52.968454 | orchestrator | Wednesday 18 March 2026 04:50:42 +0000 (0:00:00.173) 0:07:14.274 *******
2026-03-18 04:50:52.968465 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.968525 | orchestrator |
2026-03-18 04:50:52.968539 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-18 04:50:52.968550 | orchestrator | Wednesday 18 March 2026 04:50:42 +0000 (0:00:00.133) 0:07:14.407 *******
2026-03-18 04:50:52.968561 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.968571 | orchestrator |
2026-03-18 04:50:52.968582 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-18 04:50:52.968593 | orchestrator | Wednesday 18 March 2026 04:50:42 +0000 (0:00:00.144) 0:07:14.552 *******
2026-03-18 04:50:52.968603 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:50:52.968615 | orchestrator |
2026-03-18 04:50:52.968626 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-18 04:50:52.968637 | orchestrator | Wednesday 18 March 2026 04:50:43 +0000 (0:00:00.149) 0:07:14.701 *******
2026-03-18 04:50:52.968648 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:50:52.968659 | orchestrator |
2026-03-18 04:50:52.968670 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-18 04:50:52.968680 | orchestrator | Wednesday 18 March 2026 04:50:43 +0000 (0:00:00.170) 0:07:14.871 *******
2026-03-18 04:50:52.968691 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:50:52.968702 | orchestrator |
2026-03-18 04:50:52.968712 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-18 04:50:52.968724 | orchestrator | Wednesday 18 March 2026 04:50:43 +0000 (0:00:00.250) 0:07:15.122 *******
2026-03-18 04:50:52.968736 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.968748 | orchestrator |
2026-03-18 04:50:52.968760 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-18 04:50:52.968773 | orchestrator | Wednesday 18 March 2026 04:50:43 +0000 (0:00:00.137) 0:07:15.259 *******
2026-03-18 04:50:52.968811 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.968824 | orchestrator |
2026-03-18 04:50:52.968837 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-18 04:50:52.968864 | orchestrator | Wednesday 18 March 2026 04:50:43 +0000 (0:00:00.153) 0:07:15.412 *******
2026-03-18 04:50:52.968876 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.968888 | orchestrator |
2026-03-18 04:50:52.968900 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-18 04:50:52.968912 | orchestrator | Wednesday 18 March 2026 04:50:43 +0000 (0:00:00.151) 0:07:15.564 *******
2026-03-18 04:50:52.968924 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.968936 | orchestrator |
2026-03-18 04:50:52.968948 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-18 04:50:52.968961 | orchestrator | Wednesday 18 March 2026 04:50:44 +0000 (0:00:00.134) 0:07:15.698 *******
2026-03-18 04:50:52.968973 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.968985 | orchestrator |
2026-03-18 04:50:52.968997 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-18 04:50:52.969009 | orchestrator | Wednesday 18 March 2026 04:50:44 +0000 (0:00:00.456) 0:07:15.835 *******
2026-03-18 04:50:52.969021 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.969033 | orchestrator |
2026-03-18 04:50:52.969046 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-18 04:50:52.969059 | orchestrator | Wednesday 18 March 2026 04:50:44 +0000 (0:00:00.456) 0:07:16.291 *******
2026-03-18 04:50:52.969071 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.969083 | orchestrator |
2026-03-18 04:50:52.969094 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-18 04:50:52.969106 | orchestrator | Wednesday 18 March 2026 04:50:44 +0000 (0:00:00.134) 0:07:16.426 *******
2026-03-18 04:50:52.969116 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.969127 | orchestrator |
2026-03-18 04:50:52.969138 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-18 04:50:52.969148 | orchestrator | Wednesday 18 March 2026 04:50:44 +0000 (0:00:00.134) 0:07:16.560 *******
2026-03-18 04:50:52.969159 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.969170 | orchestrator |
2026-03-18 04:50:52.969180 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-18 04:50:52.969191 | orchestrator | Wednesday 18 March 2026 04:50:45 +0000 (0:00:00.181) 0:07:16.742 *******
2026-03-18 04:50:52.969202 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.969212 | orchestrator |
2026-03-18 04:50:52.969222 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-18 04:50:52.969233 | orchestrator | Wednesday 18 March 2026 04:50:45 +0000 (0:00:00.148) 0:07:16.891 *******
2026-03-18 04:50:52.969244 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.969254 | orchestrator |
2026-03-18 04:50:52.969265 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-18 04:50:52.969275 | orchestrator | Wednesday 18 March 2026 04:50:45 +0000 (0:00:00.154) 0:07:17.045 *******
2026-03-18 04:50:52.969286 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.969297 | orchestrator |
2026-03-18 04:50:52.969324 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-18 04:50:52.969335 | orchestrator | Wednesday 18 March 2026 04:50:45 +0000 (0:00:00.222) 0:07:17.267 *******
2026-03-18 04:50:52.969346 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:50:52.969356 | orchestrator |
2026-03-18 04:50:52.969367 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-18 04:50:52.969378 | orchestrator | Wednesday 18 March 2026 04:50:46 +0000 (0:00:00.964) 0:07:18.232 *******
2026-03-18 04:50:52.969389 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:50:52.969399 | orchestrator |
2026-03-18 04:50:52.969410 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-18 04:50:52.969429 | orchestrator | Wednesday 18 March 2026 04:50:48 +0000 (0:00:01.417) 0:07:19.649 *******
2026-03-18 04:50:52.969440 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-03-18 04:50:52.969452 | orchestrator |
2026-03-18 04:50:52.969463 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-18 04:50:52.969474 | orchestrator | Wednesday 18 March 2026 04:50:48 +0000 (0:00:00.243) 0:07:19.893 *******
2026-03-18 04:50:52.969508 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.969519 | orchestrator |
2026-03-18 04:50:52.969530 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-18 04:50:52.969540 | orchestrator | Wednesday 18 March 2026 04:50:48 +0000 (0:00:00.137) 0:07:20.031 *******
2026-03-18 04:50:52.969551 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:50:52.969562 | orchestrator |
2026-03-18 04:50:52.969572 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-18
04:50:52.969583 | orchestrator | Wednesday 18 March 2026 04:50:48 +0000 (0:00:00.467) 0:07:20.499 ******* 2026-03-18 04:50:52.969594 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-18 04:50:52.969604 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-18 04:50:52.969615 | orchestrator | 2026-03-18 04:50:52.969626 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-18 04:50:52.969637 | orchestrator | Wednesday 18 March 2026 04:50:49 +0000 (0:00:00.863) 0:07:21.363 ******* 2026-03-18 04:50:52.969647 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:50:52.969658 | orchestrator | 2026-03-18 04:50:52.969669 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-18 04:50:52.969680 | orchestrator | Wednesday 18 March 2026 04:50:50 +0000 (0:00:00.498) 0:07:21.861 ******* 2026-03-18 04:50:52.969690 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:52.969701 | orchestrator | 2026-03-18 04:50:52.969712 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-18 04:50:52.969722 | orchestrator | Wednesday 18 March 2026 04:50:50 +0000 (0:00:00.168) 0:07:22.029 ******* 2026-03-18 04:50:52.969733 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:52.969744 | orchestrator | 2026-03-18 04:50:52.969754 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-18 04:50:52.969765 | orchestrator | Wednesday 18 March 2026 04:50:50 +0000 (0:00:00.140) 0:07:22.170 ******* 2026-03-18 04:50:52.969781 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:52.969792 | orchestrator | 2026-03-18 04:50:52.969803 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-18 04:50:52.969813 | orchestrator | Wednesday 18 
March 2026 04:50:50 +0000 (0:00:00.129) 0:07:22.299 ******* 2026-03-18 04:50:52.969824 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-03-18 04:50:52.969834 | orchestrator | 2026-03-18 04:50:52.969845 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-18 04:50:52.969856 | orchestrator | Wednesday 18 March 2026 04:50:50 +0000 (0:00:00.250) 0:07:22.549 ******* 2026-03-18 04:50:52.969866 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:50:52.969877 | orchestrator | 2026-03-18 04:50:52.969887 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-18 04:50:52.969898 | orchestrator | Wednesday 18 March 2026 04:50:51 +0000 (0:00:00.741) 0:07:23.291 ******* 2026-03-18 04:50:52.969908 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 04:50:52.969919 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 04:50:52.969930 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-18 04:50:52.969940 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:52.969951 | orchestrator | 2026-03-18 04:50:52.969961 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-18 04:50:52.969979 | orchestrator | Wednesday 18 March 2026 04:50:51 +0000 (0:00:00.176) 0:07:23.467 ******* 2026-03-18 04:50:52.969989 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:52.970000 | orchestrator | 2026-03-18 04:50:52.970010 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-18 04:50:52.970084 | orchestrator | Wednesday 18 March 2026 04:50:51 +0000 (0:00:00.125) 0:07:23.593 ******* 2026-03-18 04:50:52.970096 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:52.970107 
| orchestrator | 2026-03-18 04:50:52.970118 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-18 04:50:52.970129 | orchestrator | Wednesday 18 March 2026 04:50:52 +0000 (0:00:00.195) 0:07:23.788 ******* 2026-03-18 04:50:52.970139 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:52.970150 | orchestrator | 2026-03-18 04:50:52.970196 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-18 04:50:52.970207 | orchestrator | Wednesday 18 March 2026 04:50:52 +0000 (0:00:00.155) 0:07:23.944 ******* 2026-03-18 04:50:52.970218 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:52.970228 | orchestrator | 2026-03-18 04:50:52.970239 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-18 04:50:52.970250 | orchestrator | Wednesday 18 March 2026 04:50:52 +0000 (0:00:00.462) 0:07:24.406 ******* 2026-03-18 04:50:52.970260 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:50:52.970271 | orchestrator | 2026-03-18 04:50:52.970292 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-18 04:51:06.429666 | orchestrator | Wednesday 18 March 2026 04:50:52 +0000 (0:00:00.166) 0:07:24.573 ******* 2026-03-18 04:51:06.429784 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:51:06.429800 | orchestrator | 2026-03-18 04:51:06.429813 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-18 04:51:06.429825 | orchestrator | Wednesday 18 March 2026 04:50:54 +0000 (0:00:01.631) 0:07:26.204 ******* 2026-03-18 04:51:06.429836 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:51:06.429847 | orchestrator | 2026-03-18 04:51:06.429858 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-18 04:51:06.429869 | orchestrator | Wednesday 18 March 2026 04:50:54 +0000 
(0:00:00.149) 0:07:26.353 ******* 2026-03-18 04:51:06.429880 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-03-18 04:51:06.429891 | orchestrator | 2026-03-18 04:51:06.429902 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-18 04:51:06.429913 | orchestrator | Wednesday 18 March 2026 04:50:54 +0000 (0:00:00.237) 0:07:26.590 ******* 2026-03-18 04:51:06.429924 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.429936 | orchestrator | 2026-03-18 04:51:06.429947 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-18 04:51:06.429957 | orchestrator | Wednesday 18 March 2026 04:50:55 +0000 (0:00:00.173) 0:07:26.763 ******* 2026-03-18 04:51:06.429969 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.429979 | orchestrator | 2026-03-18 04:51:06.429990 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-18 04:51:06.430001 | orchestrator | Wednesday 18 March 2026 04:50:55 +0000 (0:00:00.154) 0:07:26.918 ******* 2026-03-18 04:51:06.430012 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.430120 | orchestrator | 2026-03-18 04:51:06.430132 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-18 04:51:06.430143 | orchestrator | Wednesday 18 March 2026 04:50:55 +0000 (0:00:00.171) 0:07:27.090 ******* 2026-03-18 04:51:06.430154 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.430165 | orchestrator | 2026-03-18 04:51:06.430175 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-18 04:51:06.430186 | orchestrator | Wednesday 18 March 2026 04:50:55 +0000 (0:00:00.144) 0:07:27.234 ******* 2026-03-18 04:51:06.430200 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.430212 | orchestrator | 
2026-03-18 04:51:06.430225 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-18 04:51:06.430259 | orchestrator | Wednesday 18 March 2026 04:50:55 +0000 (0:00:00.149) 0:07:27.384 ******* 2026-03-18 04:51:06.430272 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.430285 | orchestrator | 2026-03-18 04:51:06.430297 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-18 04:51:06.430310 | orchestrator | Wednesday 18 March 2026 04:50:55 +0000 (0:00:00.159) 0:07:27.543 ******* 2026-03-18 04:51:06.430323 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.430336 | orchestrator | 2026-03-18 04:51:06.430349 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-18 04:51:06.430376 | orchestrator | Wednesday 18 March 2026 04:50:56 +0000 (0:00:00.151) 0:07:27.694 ******* 2026-03-18 04:51:06.430389 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.430402 | orchestrator | 2026-03-18 04:51:06.430414 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-18 04:51:06.430425 | orchestrator | Wednesday 18 March 2026 04:50:56 +0000 (0:00:00.459) 0:07:28.154 ******* 2026-03-18 04:51:06.430436 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:51:06.430447 | orchestrator | 2026-03-18 04:51:06.430457 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-18 04:51:06.430470 | orchestrator | Wednesday 18 March 2026 04:50:56 +0000 (0:00:00.233) 0:07:28.387 ******* 2026-03-18 04:51:06.430551 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-03-18 04:51:06.430570 | orchestrator | 2026-03-18 04:51:06.430583 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-18 04:51:06.430601 | 
orchestrator | Wednesday 18 March 2026 04:50:56 +0000 (0:00:00.214) 0:07:28.602 ******* 2026-03-18 04:51:06.430617 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-03-18 04:51:06.430636 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-18 04:51:06.430655 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-18 04:51:06.430672 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-18 04:51:06.430689 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-18 04:51:06.430709 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-18 04:51:06.430727 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-18 04:51:06.430745 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-18 04:51:06.430756 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 04:51:06.430773 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 04:51:06.430791 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 04:51:06.430809 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 04:51:06.430829 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 04:51:06.430840 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 04:51:06.430851 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-03-18 04:51:06.430862 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-03-18 04:51:06.430873 | orchestrator | 2026-03-18 04:51:06.430884 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-18 04:51:06.430895 | orchestrator | Wednesday 18 March 2026 04:51:02 +0000 (0:00:05.852) 0:07:34.454 ******* 2026-03-18 04:51:06.430906 | orchestrator | skipping: [testbed-node-2] 2026-03-18 
04:51:06.430916 | orchestrator | 2026-03-18 04:51:06.430927 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-18 04:51:06.430966 | orchestrator | Wednesday 18 March 2026 04:51:02 +0000 (0:00:00.143) 0:07:34.597 ******* 2026-03-18 04:51:06.430982 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.430994 | orchestrator | 2026-03-18 04:51:06.431004 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-18 04:51:06.431027 | orchestrator | Wednesday 18 March 2026 04:51:03 +0000 (0:00:00.166) 0:07:34.763 ******* 2026-03-18 04:51:06.431038 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.431051 | orchestrator | 2026-03-18 04:51:06.431070 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-18 04:51:06.431088 | orchestrator | Wednesday 18 March 2026 04:51:03 +0000 (0:00:00.142) 0:07:34.906 ******* 2026-03-18 04:51:06.431105 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.431122 | orchestrator | 2026-03-18 04:51:06.431140 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-18 04:51:06.431160 | orchestrator | Wednesday 18 March 2026 04:51:03 +0000 (0:00:00.152) 0:07:35.059 ******* 2026-03-18 04:51:06.431172 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.431183 | orchestrator | 2026-03-18 04:51:06.431194 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-18 04:51:06.431204 | orchestrator | Wednesday 18 March 2026 04:51:03 +0000 (0:00:00.139) 0:07:35.198 ******* 2026-03-18 04:51:06.431215 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.431225 | orchestrator | 2026-03-18 04:51:06.431236 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-18 04:51:06.431248 | 
orchestrator | Wednesday 18 March 2026 04:51:03 +0000 (0:00:00.138) 0:07:35.336 ******* 2026-03-18 04:51:06.431258 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.431275 | orchestrator | 2026-03-18 04:51:06.431292 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-18 04:51:06.431310 | orchestrator | Wednesday 18 March 2026 04:51:03 +0000 (0:00:00.133) 0:07:35.470 ******* 2026-03-18 04:51:06.431329 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.431341 | orchestrator | 2026-03-18 04:51:06.431352 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-18 04:51:06.431363 | orchestrator | Wednesday 18 March 2026 04:51:04 +0000 (0:00:00.430) 0:07:35.900 ******* 2026-03-18 04:51:06.431374 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.431384 | orchestrator | 2026-03-18 04:51:06.431395 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-18 04:51:06.431406 | orchestrator | Wednesday 18 March 2026 04:51:04 +0000 (0:00:00.132) 0:07:36.033 ******* 2026-03-18 04:51:06.431416 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.431427 | orchestrator | 2026-03-18 04:51:06.431438 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-18 04:51:06.431448 | orchestrator | Wednesday 18 March 2026 04:51:04 +0000 (0:00:00.154) 0:07:36.187 ******* 2026-03-18 04:51:06.431459 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.431470 | orchestrator | 2026-03-18 04:51:06.431514 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-18 04:51:06.431535 | orchestrator | Wednesday 18 March 2026 04:51:04 +0000 (0:00:00.143) 0:07:36.331 ******* 2026-03-18 04:51:06.431554 | orchestrator | 
skipping: [testbed-node-2] 2026-03-18 04:51:06.431569 | orchestrator | 2026-03-18 04:51:06.431580 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-18 04:51:06.431591 | orchestrator | Wednesday 18 March 2026 04:51:04 +0000 (0:00:00.146) 0:07:36.477 ******* 2026-03-18 04:51:06.431602 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.431612 | orchestrator | 2026-03-18 04:51:06.431623 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-18 04:51:06.431634 | orchestrator | Wednesday 18 March 2026 04:51:05 +0000 (0:00:00.264) 0:07:36.741 ******* 2026-03-18 04:51:06.431644 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.431660 | orchestrator | 2026-03-18 04:51:06.431678 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-18 04:51:06.431697 | orchestrator | Wednesday 18 March 2026 04:51:05 +0000 (0:00:00.144) 0:07:36.885 ******* 2026-03-18 04:51:06.431715 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.431733 | orchestrator | 2026-03-18 04:51:06.431784 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-18 04:51:06.431807 | orchestrator | Wednesday 18 March 2026 04:51:05 +0000 (0:00:00.247) 0:07:37.133 ******* 2026-03-18 04:51:06.431819 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.431888 | orchestrator | 2026-03-18 04:51:06.432000 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-18 04:51:06.432050 | orchestrator | Wednesday 18 March 2026 04:51:05 +0000 (0:00:00.148) 0:07:37.281 ******* 2026-03-18 04:51:06.432061 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.432072 | orchestrator | 2026-03-18 04:51:06.432083 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node 
"{{ ceph_dashboard_call_item }}"] *** 2026-03-18 04:51:06.432096 | orchestrator | Wednesday 18 March 2026 04:51:05 +0000 (0:00:00.154) 0:07:37.436 ******* 2026-03-18 04:51:06.432106 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.432117 | orchestrator | 2026-03-18 04:51:06.432128 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-18 04:51:06.432139 | orchestrator | Wednesday 18 March 2026 04:51:05 +0000 (0:00:00.158) 0:07:37.594 ******* 2026-03-18 04:51:06.432149 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.432160 | orchestrator | 2026-03-18 04:51:06.432171 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 04:51:06.432182 | orchestrator | Wednesday 18 March 2026 04:51:06 +0000 (0:00:00.149) 0:07:37.744 ******* 2026-03-18 04:51:06.432192 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.432203 | orchestrator | 2026-03-18 04:51:06.432214 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 04:51:06.432224 | orchestrator | Wednesday 18 March 2026 04:51:06 +0000 (0:00:00.141) 0:07:37.885 ******* 2026-03-18 04:51:06.432235 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:06.432246 | orchestrator | 2026-03-18 04:51:06.432269 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 04:51:52.228089 | orchestrator | Wednesday 18 March 2026 04:51:06 +0000 (0:00:00.147) 0:07:38.033 ******* 2026-03-18 04:51:52.228181 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-18 04:51:52.228190 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-18 04:51:52.228197 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-18 04:51:52.228204 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:52.228211 | orchestrator | 2026-03-18 
04:51:52.228229 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 04:51:52.228236 | orchestrator | Wednesday 18 March 2026 04:51:07 +0000 (0:00:01.074) 0:07:39.108 ******* 2026-03-18 04:51:52.228243 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-18 04:51:52.228257 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-18 04:51:52.228264 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-18 04:51:52.228270 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:52.228277 | orchestrator | 2026-03-18 04:51:52.228283 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 04:51:52.228290 | orchestrator | Wednesday 18 March 2026 04:51:07 +0000 (0:00:00.442) 0:07:39.550 ******* 2026-03-18 04:51:52.228296 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-18 04:51:52.228302 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-18 04:51:52.228309 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-18 04:51:52.228315 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:52.228321 | orchestrator | 2026-03-18 04:51:52.228327 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 04:51:52.228334 | orchestrator | Wednesday 18 March 2026 04:51:08 +0000 (0:00:00.409) 0:07:39.960 ******* 2026-03-18 04:51:52.228340 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:52.228346 | orchestrator | 2026-03-18 04:51:52.228352 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 04:51:52.228379 | orchestrator | Wednesday 18 March 2026 04:51:08 +0000 (0:00:00.149) 0:07:40.110 ******* 2026-03-18 04:51:52.228386 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-18 04:51:52.228392 | 
orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:52.228398 | orchestrator | 2026-03-18 04:51:52.228404 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-18 04:51:52.228410 | orchestrator | Wednesday 18 March 2026 04:51:08 +0000 (0:00:00.355) 0:07:40.465 ******* 2026-03-18 04:51:52.228417 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:51:52.228423 | orchestrator | 2026-03-18 04:51:52.228429 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-18 04:51:52.228436 | orchestrator | Wednesday 18 March 2026 04:51:09 +0000 (0:00:00.799) 0:07:41.264 ******* 2026-03-18 04:51:52.228442 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:51:52.228448 | orchestrator | 2026-03-18 04:51:52.228466 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-18 04:51:52.228473 | orchestrator | Wednesday 18 March 2026 04:51:09 +0000 (0:00:00.157) 0:07:41.422 ******* 2026-03-18 04:51:52.228479 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2 2026-03-18 04:51:52.228486 | orchestrator | 2026-03-18 04:51:52.228492 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-18 04:51:52.228499 | orchestrator | Wednesday 18 March 2026 04:51:10 +0000 (0:00:00.268) 0:07:41.691 ******* 2026-03-18 04:51:52.228505 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:51:52.228511 | orchestrator | 2026-03-18 04:51:52.228517 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-18 04:51:52.228564 | orchestrator | Wednesday 18 March 2026 04:51:12 +0000 (0:00:02.201) 0:07:43.892 ******* 2026-03-18 04:51:52.228571 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:51:52.228577 | orchestrator | 2026-03-18 04:51:52.228584 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-03-18 04:51:52.228590 | orchestrator | Wednesday 18 March 2026 04:51:12 +0000 (0:00:00.181) 0:07:44.074 ******* 2026-03-18 04:51:52.228597 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:51:52.228603 | orchestrator | 2026-03-18 04:51:52.228609 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-18 04:51:52.228615 | orchestrator | Wednesday 18 March 2026 04:51:12 +0000 (0:00:00.461) 0:07:44.536 ******* 2026-03-18 04:51:52.228621 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:51:52.228628 | orchestrator | 2026-03-18 04:51:52.228634 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-18 04:51:52.228640 | orchestrator | Wednesday 18 March 2026 04:51:13 +0000 (0:00:00.193) 0:07:44.729 ******* 2026-03-18 04:51:52.228647 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:51:52.228654 | orchestrator | 2026-03-18 04:51:52.228661 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-18 04:51:52.228668 | orchestrator | Wednesday 18 March 2026 04:51:14 +0000 (0:00:01.024) 0:07:45.754 ******* 2026-03-18 04:51:52.228675 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:51:52.228682 | orchestrator | 2026-03-18 04:51:52.228689 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-18 04:51:52.228697 | orchestrator | Wednesday 18 March 2026 04:51:14 +0000 (0:00:00.594) 0:07:46.348 ******* 2026-03-18 04:51:52.228704 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:51:52.228711 | orchestrator | 2026-03-18 04:51:52.228718 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-18 04:51:52.228725 | orchestrator | Wednesday 18 March 2026 04:51:15 +0000 (0:00:00.462) 0:07:46.811 ******* 2026-03-18 04:51:52.228732 | orchestrator | ok: [testbed-node-2] 
2026-03-18 04:51:52.228739 | orchestrator | 2026-03-18 04:51:52.228746 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-18 04:51:52.228753 | orchestrator | Wednesday 18 March 2026 04:51:15 +0000 (0:00:00.507) 0:07:47.318 ******* 2026-03-18 04:51:52.228760 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-18 04:51:52.228772 | orchestrator | 2026-03-18 04:51:52.228779 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-18 04:51:52.228798 | orchestrator | Wednesday 18 March 2026 04:51:16 +0000 (0:00:00.587) 0:07:47.905 ******* 2026-03-18 04:51:52.228806 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-18 04:51:52.228813 | orchestrator | 2026-03-18 04:51:52.228820 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-18 04:51:52.228827 | orchestrator | Wednesday 18 March 2026 04:51:16 +0000 (0:00:00.601) 0:07:48.507 ******* 2026-03-18 04:51:52.228834 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 04:51:52.228841 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-18 04:51:52.228849 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-18 04:51:52.228856 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-18 04:51:52.228863 | orchestrator | 2026-03-18 04:51:52.228871 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-18 04:51:52.228878 | orchestrator | Wednesday 18 March 2026 04:51:19 +0000 (0:00:02.840) 0:07:51.347 ******* 2026-03-18 04:51:52.228885 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:51:52.228892 | orchestrator | 2026-03-18 04:51:52.228899 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-18 04:51:52.228906 | 
orchestrator | Wednesday 18 March 2026 04:51:20 +0000 (0:00:01.023) 0:07:52.371 *******
2026-03-18 04:51:52.228913 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:51:52.228920 | orchestrator |
2026-03-18 04:51:52.228927 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-03-18 04:51:52.228934 | orchestrator | Wednesday 18 March 2026 04:51:20 +0000 (0:00:00.150) 0:07:52.522 *******
2026-03-18 04:51:52.228942 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:51:52.228948 | orchestrator |
2026-03-18 04:51:52.228955 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-03-18 04:51:52.228963 | orchestrator | Wednesday 18 March 2026 04:51:21 +0000 (0:00:00.174) 0:07:52.697 *******
2026-03-18 04:51:52.228970 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:51:52.228976 | orchestrator |
2026-03-18 04:51:52.228984 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-03-18 04:51:52.228991 | orchestrator | Wednesday 18 March 2026 04:51:22 +0000 (0:00:01.103) 0:07:53.801 *******
2026-03-18 04:51:52.228998 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:51:52.229005 | orchestrator |
2026-03-18 04:51:52.229012 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-03-18 04:51:52.229018 | orchestrator | Wednesday 18 March 2026 04:51:22 +0000 (0:00:00.765) 0:07:54.566 *******
2026-03-18 04:51:52.229024 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:51:52.229030 | orchestrator |
2026-03-18 04:51:52.229036 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-03-18 04:51:52.229043 | orchestrator | Wednesday 18 March 2026 04:51:23 +0000 (0:00:00.201) 0:07:54.768 *******
2026-03-18 04:51:52.229049 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2
2026-03-18 04:51:52.229055 | orchestrator |
2026-03-18 04:51:52.229065 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-03-18 04:51:52.229071 | orchestrator | Wednesday 18 March 2026 04:51:23 +0000 (0:00:00.213) 0:07:54.981 *******
2026-03-18 04:51:52.229077 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:51:52.229083 | orchestrator |
2026-03-18 04:51:52.229089 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-03-18 04:51:52.229096 | orchestrator | Wednesday 18 March 2026 04:51:23 +0000 (0:00:00.143) 0:07:55.125 *******
2026-03-18 04:51:52.229102 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:51:52.229108 | orchestrator |
2026-03-18 04:51:52.229114 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-03-18 04:51:52.229120 | orchestrator | Wednesday 18 March 2026 04:51:23 +0000 (0:00:00.145) 0:07:55.271 *******
2026-03-18 04:51:52.229131 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2
2026-03-18 04:51:52.229137 | orchestrator |
2026-03-18 04:51:52.229143 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-03-18 04:51:52.229149 | orchestrator | Wednesday 18 March 2026 04:51:23 +0000 (0:00:00.209) 0:07:55.480 *******
2026-03-18 04:51:52.229156 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:51:52.229162 | orchestrator |
2026-03-18 04:51:52.229168 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-03-18 04:51:52.229174 | orchestrator | Wednesday 18 March 2026 04:51:25 +0000 (0:00:01.338) 0:07:56.818 *******
2026-03-18 04:51:52.229181 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:51:52.229187 | orchestrator |
2026-03-18 04:51:52.229193 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-03-18 04:51:52.229199 | orchestrator | Wednesday 18 March 2026 04:51:26 +0000 (0:00:00.946) 0:07:57.765 *******
2026-03-18 04:51:52.229205 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:51:52.229212 | orchestrator |
2026-03-18 04:51:52.229218 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-18 04:51:52.229224 | orchestrator | Wednesday 18 March 2026 04:51:27 +0000 (0:00:01.383) 0:07:59.149 *******
2026-03-18 04:51:52.229230 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:51:52.229236 | orchestrator |
2026-03-18 04:51:52.229242 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-18 04:51:52.229249 | orchestrator | Wednesday 18 March 2026 04:51:29 +0000 (0:00:02.262) 0:08:01.412 *******
2026-03-18 04:51:52.229255 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2
2026-03-18 04:51:52.229261 | orchestrator |
2026-03-18 04:51:52.229267 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-18 04:51:52.229273 | orchestrator | Wednesday 18 March 2026 04:51:30 +0000 (0:00:00.495) 0:08:01.908 *******
2026-03-18 04:51:52.229279 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-03-18 04:51:52.229286 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:51:52.229292 | orchestrator |
2026-03-18 04:51:52.229298 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-18 04:51:52.229308 | orchestrator | Wednesday 18 March 2026 04:51:52 +0000 (0:00:21.921) 0:08:23.830 *******
2026-03-18 04:52:12.803308 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:52:12.803425 | orchestrator |
2026-03-18 04:52:12.803442 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-18 04:52:12.803456 | orchestrator | Wednesday 18 March 2026 04:51:54 +0000 (0:00:02.014) 0:08:25.844 *******
2026-03-18 04:52:12.803469 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:52:12.803483 | orchestrator |
2026-03-18 04:52:12.803495 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-18 04:52:12.803507 | orchestrator | Wednesday 18 March 2026 04:51:54 +0000 (0:00:00.144) 0:08:25.989 *******
2026-03-18 04:52:12.803522 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-18 04:52:12.803537 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-18 04:52:12.803602 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-18 04:52:12.803639 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-18 04:52:12.803666 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-18 04:52:12.803678 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2a587efd68a9b7e6513fb76c30b9e3b44df6aa0f'}])
2026-03-18 04:52:12.803691 | orchestrator |
2026-03-18 04:52:12.803702 | orchestrator | TASK [Start ceph mgr] **********************************************************
2026-03-18 04:52:12.803713 | orchestrator | Wednesday 18 March 2026 04:52:03 +0000 (0:00:08.979) 0:08:34.969 *******
2026-03-18 04:52:12.803724 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:52:12.803735 | orchestrator |
2026-03-18 04:52:12.803746 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-18 04:52:12.803756 | orchestrator | Wednesday 18 March 2026 04:52:04 +0000 (0:00:01.507) 0:08:36.477 *******
2026-03-18 04:52:12.803767 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 04:52:12.803779 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1)
2026-03-18 04:52:12.803789 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2)
2026-03-18 04:52:12.803800 | orchestrator |
2026-03-18 04:52:12.803811 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-18 04:52:12.803822 | orchestrator | Wednesday 18 March 2026 04:52:06 +0000 (0:00:01.193) 0:08:37.670 *******
2026-03-18 04:52:12.803832 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-18 04:52:12.803846 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-18 04:52:12.803858 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-18 04:52:12.803870 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:52:12.803882 | orchestrator |
2026-03-18 04:52:12.803895 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-03-18 04:52:12.803907 | orchestrator | Wednesday 18 March 2026 04:52:06 +0000 (0:00:00.469) 0:08:38.140 *******
2026-03-18 04:52:12.803920 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:52:12.803932 | orchestrator |
2026-03-18 04:52:12.803944 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-03-18 04:52:12.803971 | orchestrator | Wednesday 18 March 2026 04:52:06 +0000 (0:00:00.170) 0:08:38.310 *******
2026-03-18 04:52:12.803983 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:52:12.803994 | orchestrator |
2026-03-18 04:52:12.804005 | orchestrator | PLAY [Reset mon_host] **********************************************************
2026-03-18 04:52:12.804016 | orchestrator |
2026-03-18 04:52:12.804026 | orchestrator | TASK [Reset mon_host fact] *****************************************************
2026-03-18 04:52:12.804037 | orchestrator | Wednesday 18 March 2026 04:52:08 +0000 (0:00:02.175) 0:08:40.486 *******
2026-03-18 04:52:12.804048 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:52:12.804067 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:52:12.804078 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:52:12.804089 | orchestrator |
2026-03-18 04:52:12.804100 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-03-18 04:52:12.804111 | orchestrator |
2026-03-18 04:52:12.804122 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-03-18 04:52:12.804133 | orchestrator | Wednesday 18 March 2026 04:52:09 +0000 (0:00:00.800) 0:08:41.286 *******
2026-03-18 04:52:12.804144 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804155 | orchestrator |
2026-03-18 04:52:12.804165 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-18 04:52:12.804176 | orchestrator | Wednesday 18 March 2026 04:52:09 +0000 (0:00:00.203) 0:08:41.517 *******
2026-03-18 04:52:12.804187 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804198 | orchestrator |
2026-03-18 04:52:12.804209 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-18 04:52:12.804219 | orchestrator | Wednesday 18 March 2026 04:52:10 +0000 (0:00:00.141) 0:08:41.720 *******
2026-03-18 04:52:12.804230 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804241 | orchestrator |
2026-03-18 04:52:12.804252 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-18 04:52:12.804263 | orchestrator | Wednesday 18 March 2026 04:52:10 +0000 (0:00:00.141) 0:08:41.861 *******
2026-03-18 04:52:12.804274 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804285 | orchestrator |
2026-03-18 04:52:12.804295 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-18 04:52:12.804306 | orchestrator | Wednesday 18 March 2026 04:52:10 +0000 (0:00:00.162) 0:08:42.023 *******
2026-03-18 04:52:12.804317 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804328 | orchestrator |
2026-03-18 04:52:12.804338 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-18 04:52:12.804349 | orchestrator | Wednesday 18 March 2026 04:52:10 +0000 (0:00:00.146) 0:08:42.170 *******
2026-03-18 04:52:12.804360 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804371 | orchestrator |
2026-03-18 04:52:12.804382 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-18 04:52:12.804393 | orchestrator | Wednesday 18 March 2026 04:52:10 +0000 (0:00:00.137) 0:08:42.307 *******
2026-03-18 04:52:12.804404 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804414 | orchestrator |
2026-03-18 04:52:12.804425 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-18 04:52:12.804441 | orchestrator | Wednesday 18 March 2026 04:52:10 +0000 (0:00:00.143) 0:08:42.451 *******
2026-03-18 04:52:12.804452 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804463 | orchestrator |
2026-03-18 04:52:12.804474 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-18 04:52:12.804484 | orchestrator | Wednesday 18 March 2026 04:52:11 +0000 (0:00:00.419) 0:08:42.870 *******
2026-03-18 04:52:12.804495 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804506 | orchestrator |
2026-03-18 04:52:12.804516 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-18 04:52:12.804527 | orchestrator | Wednesday 18 March 2026 04:52:11 +0000 (0:00:00.148) 0:08:43.019 *******
2026-03-18 04:52:12.804538 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804567 | orchestrator |
2026-03-18 04:52:12.804578 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-18 04:52:12.804590 | orchestrator | Wednesday 18 March 2026 04:52:11 +0000 (0:00:00.140) 0:08:43.159 *******
2026-03-18 04:52:12.804600 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804611 | orchestrator |
2026-03-18 04:52:12.804622 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-18 04:52:12.804633 | orchestrator | Wednesday 18 March 2026 04:52:11 +0000 (0:00:00.152) 0:08:43.312 *******
2026-03-18 04:52:12.804644 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804661 | orchestrator |
2026-03-18 04:52:12.804672 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-18 04:52:12.804683 | orchestrator | Wednesday 18 March 2026 04:52:11 +0000 (0:00:00.223) 0:08:43.536 *******
2026-03-18 04:52:12.804694 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804704 | orchestrator |
2026-03-18 04:52:12.804715 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-18 04:52:12.804726 | orchestrator | Wednesday 18 March 2026 04:52:12 +0000 (0:00:00.153) 0:08:43.689 *******
2026-03-18 04:52:12.804737 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804748 | orchestrator |
2026-03-18 04:52:12.804759 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-18 04:52:12.804770 | orchestrator | Wednesday 18 March 2026 04:52:12 +0000 (0:00:00.141) 0:08:43.831 *******
2026-03-18 04:52:12.804780 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804791 | orchestrator |
2026-03-18 04:52:12.804802 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-18 04:52:12.804813 | orchestrator | Wednesday 18 March 2026 04:52:12 +0000 (0:00:00.144) 0:08:43.976 *******
2026-03-18 04:52:12.804824 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804834 | orchestrator |
2026-03-18 04:52:12.804845 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-18 04:52:12.804856 | orchestrator | Wednesday 18 March 2026 04:52:12 +0000 (0:00:00.142) 0:08:44.119 *******
2026-03-18 04:52:12.804867 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804877 | orchestrator |
2026-03-18 04:52:12.804888 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-18 04:52:12.804899 | orchestrator | Wednesday 18 March 2026 04:52:12 +0000 (0:00:00.148) 0:08:44.267 *******
2026-03-18 04:52:12.804910 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:12.804921 | orchestrator |
2026-03-18 04:52:12.804938 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-18 04:52:20.653293 | orchestrator | Wednesday 18 March 2026 04:52:12 +0000 (0:00:00.141) 0:08:44.409 *******
2026-03-18 04:52:20.653403 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.653418 | orchestrator |
2026-03-18 04:52:20.653429 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-18 04:52:20.653440 | orchestrator | Wednesday 18 March 2026 04:52:12 +0000 (0:00:00.149) 0:08:44.558 *******
2026-03-18 04:52:20.653450 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.653459 | orchestrator |
2026-03-18 04:52:20.653469 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-18 04:52:20.653479 | orchestrator | Wednesday 18 March 2026 04:52:13 +0000 (0:00:00.485) 0:08:45.043 *******
2026-03-18 04:52:20.653488 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.653498 | orchestrator |
2026-03-18 04:52:20.653508 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-18 04:52:20.653517 | orchestrator | Wednesday 18 March 2026 04:52:13 +0000 (0:00:00.146) 0:08:45.190 *******
2026-03-18 04:52:20.653527 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.653538 | orchestrator |
2026-03-18 04:52:20.653633 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-18 04:52:20.653650 | orchestrator | Wednesday 18 March 2026 04:52:13 +0000 (0:00:00.145) 0:08:45.335 *******
2026-03-18 04:52:20.653665 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.653681 | orchestrator |
2026-03-18 04:52:20.653698 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-18 04:52:20.653714 | orchestrator | Wednesday 18 March 2026 04:52:13 +0000 (0:00:00.169) 0:08:45.505 *******
2026-03-18 04:52:20.653729 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.653744 | orchestrator |
2026-03-18 04:52:20.653754 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-18 04:52:20.653764 | orchestrator | Wednesday 18 March 2026 04:52:14 +0000 (0:00:00.239) 0:08:45.745 *******
2026-03-18 04:52:20.653774 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.653807 | orchestrator |
2026-03-18 04:52:20.653817 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-18 04:52:20.653827 | orchestrator | Wednesday 18 March 2026 04:52:14 +0000 (0:00:00.171) 0:08:45.916 *******
2026-03-18 04:52:20.653839 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.653850 | orchestrator |
2026-03-18 04:52:20.653861 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-18 04:52:20.653872 | orchestrator | Wednesday 18 March 2026 04:52:14 +0000 (0:00:00.151) 0:08:46.067 *******
2026-03-18 04:52:20.653882 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.653893 | orchestrator |
2026-03-18 04:52:20.653903 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-18 04:52:20.653928 | orchestrator | Wednesday 18 March 2026 04:52:14 +0000 (0:00:00.144) 0:08:46.212 *******
2026-03-18 04:52:20.653939 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.653950 | orchestrator |
2026-03-18 04:52:20.653962 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-18 04:52:20.653972 | orchestrator | Wednesday 18 March 2026 04:52:14 +0000 (0:00:00.127) 0:08:46.339 *******
2026-03-18 04:52:20.653983 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.653994 | orchestrator |
2026-03-18 04:52:20.654004 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-18 04:52:20.654067 | orchestrator | Wednesday 18 March 2026 04:52:14 +0000 (0:00:00.136) 0:08:46.476 *******
2026-03-18 04:52:20.654080 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654091 | orchestrator |
2026-03-18 04:52:20.654102 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-18 04:52:20.654113 | orchestrator | Wednesday 18 March 2026 04:52:15 +0000 (0:00:00.160) 0:08:46.637 *******
2026-03-18 04:52:20.654124 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654135 | orchestrator |
2026-03-18 04:52:20.654146 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-18 04:52:20.654157 | orchestrator | Wednesday 18 March 2026 04:52:15 +0000 (0:00:00.156) 0:08:46.793 *******
2026-03-18 04:52:20.654167 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654177 | orchestrator |
2026-03-18 04:52:20.654187 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-18 04:52:20.654196 | orchestrator | Wednesday 18 March 2026 04:52:15 +0000 (0:00:00.498) 0:08:47.292 *******
2026-03-18 04:52:20.654206 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654215 | orchestrator |
2026-03-18 04:52:20.654225 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-18 04:52:20.654234 | orchestrator | Wednesday 18 March 2026 04:52:15 +0000 (0:00:00.141) 0:08:47.434 *******
2026-03-18 04:52:20.654244 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654253 | orchestrator |
2026-03-18 04:52:20.654263 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-18 04:52:20.654272 | orchestrator | Wednesday 18 March 2026 04:52:15 +0000 (0:00:00.136) 0:08:47.571 *******
2026-03-18 04:52:20.654282 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654291 | orchestrator |
2026-03-18 04:52:20.654301 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-18 04:52:20.654310 | orchestrator | Wednesday 18 March 2026 04:52:16 +0000 (0:00:00.170) 0:08:47.741 *******
2026-03-18 04:52:20.654320 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654329 | orchestrator |
2026-03-18 04:52:20.654339 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-18 04:52:20.654348 | orchestrator | Wednesday 18 March 2026 04:52:16 +0000 (0:00:00.146) 0:08:47.888 *******
2026-03-18 04:52:20.654358 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654367 | orchestrator |
2026-03-18 04:52:20.654377 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-18 04:52:20.654386 | orchestrator | Wednesday 18 March 2026 04:52:16 +0000 (0:00:00.151) 0:08:48.040 *******
2026-03-18 04:52:20.654404 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654413 | orchestrator |
2026-03-18 04:52:20.654423 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-18 04:52:20.654453 | orchestrator | Wednesday 18 March 2026 04:52:16 +0000 (0:00:00.131) 0:08:48.171 *******
2026-03-18 04:52:20.654463 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654473 | orchestrator |
2026-03-18 04:52:20.654483 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-18 04:52:20.654494 | orchestrator | Wednesday 18 March 2026 04:52:16 +0000 (0:00:00.153) 0:08:48.325 *******
2026-03-18 04:52:20.654503 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654513 | orchestrator |
2026-03-18 04:52:20.654522 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-18 04:52:20.654532 | orchestrator | Wednesday 18 March 2026 04:52:16 +0000 (0:00:00.145) 0:08:48.471 *******
2026-03-18 04:52:20.654541 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654582 | orchestrator |
2026-03-18 04:52:20.654593 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-18 04:52:20.654603 | orchestrator | Wednesday 18 March 2026 04:52:16 +0000 (0:00:00.130) 0:08:48.601 *******
2026-03-18 04:52:20.654612 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654622 | orchestrator |
2026-03-18 04:52:20.654631 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-18 04:52:20.654641 | orchestrator | Wednesday 18 March 2026 04:52:17 +0000 (0:00:00.138) 0:08:48.740 *******
2026-03-18 04:52:20.654650 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654660 | orchestrator |
2026-03-18 04:52:20.654669 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-18 04:52:20.654679 | orchestrator | Wednesday 18 March 2026 04:52:17 +0000 (0:00:00.154) 0:08:48.894 *******
2026-03-18 04:52:20.654688 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654697 | orchestrator |
2026-03-18 04:52:20.654707 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-18 04:52:20.654716 | orchestrator | Wednesday 18 March 2026 04:52:17 +0000 (0:00:00.134) 0:08:49.029 *******
2026-03-18 04:52:20.654726 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654735 | orchestrator |
2026-03-18 04:52:20.654745 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-18 04:52:20.654754 | orchestrator | Wednesday 18 March 2026 04:52:17 +0000 (0:00:00.444) 0:08:49.473 *******
2026-03-18 04:52:20.654763 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654773 | orchestrator |
2026-03-18 04:52:20.654782 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-18 04:52:20.654792 | orchestrator | Wednesday 18 March 2026 04:52:18 +0000 (0:00:00.256) 0:08:49.730 *******
2026-03-18 04:52:20.654801 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654811 | orchestrator |
2026-03-18 04:52:20.654820 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-18 04:52:20.654835 | orchestrator | Wednesday 18 March 2026 04:52:18 +0000 (0:00:00.170) 0:08:49.900 *******
2026-03-18 04:52:20.654845 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654855 | orchestrator |
2026-03-18 04:52:20.654864 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-18 04:52:20.654873 | orchestrator | Wednesday 18 March 2026 04:52:18 +0000 (0:00:00.262) 0:08:50.163 *******
2026-03-18 04:52:20.654883 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654892 | orchestrator |
2026-03-18 04:52:20.654902 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-18 04:52:20.654911 | orchestrator | Wednesday 18 March 2026 04:52:18 +0000 (0:00:00.155) 0:08:50.318 *******
2026-03-18 04:52:20.654920 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654930 | orchestrator |
2026-03-18 04:52:20.654940 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-18 04:52:20.654957 | orchestrator | Wednesday 18 March 2026 04:52:18 +0000 (0:00:00.143) 0:08:50.461 *******
2026-03-18 04:52:20.654967 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.654976 | orchestrator |
2026-03-18 04:52:20.654986 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-18 04:52:20.654995 | orchestrator | Wednesday 18 March 2026 04:52:18 +0000 (0:00:00.150) 0:08:50.611 *******
2026-03-18 04:52:20.655004 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.655014 | orchestrator |
2026-03-18 04:52:20.655024 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-18 04:52:20.655033 | orchestrator | Wednesday 18 March 2026 04:52:19 +0000 (0:00:00.148) 0:08:50.759 *******
2026-03-18 04:52:20.655042 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.655052 | orchestrator |
2026-03-18 04:52:20.655061 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-18 04:52:20.655071 | orchestrator | Wednesday 18 March 2026 04:52:19 +0000 (0:00:00.158) 0:08:50.918 *******
2026-03-18 04:52:20.655080 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.655090 | orchestrator |
2026-03-18 04:52:20.655099 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-18 04:52:20.655109 | orchestrator | Wednesday 18 March 2026 04:52:19 +0000 (0:00:00.155) 0:08:51.073 *******
2026-03-18 04:52:20.655118 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-18 04:52:20.655128 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-18 04:52:20.655138 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-18 04:52:20.655147 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.655157 | orchestrator |
2026-03-18 04:52:20.655166 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-18 04:52:20.655176 | orchestrator | Wednesday 18 March 2026 04:52:19 +0000 (0:00:00.425) 0:08:51.498 *******
2026-03-18 04:52:20.655185 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-18 04:52:20.655195 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-18 04:52:20.655205 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-18 04:52:20.655214 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:20.655224 | orchestrator |
2026-03-18 04:52:20.655233 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-18 04:52:20.655249 | orchestrator | Wednesday 18 March 2026 04:52:20 +0000 (0:00:00.757) 0:08:52.256 *******
2026-03-18 04:52:29.613967 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-18 04:52:29.614139 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-18 04:52:29.614158 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-18 04:52:29.614171 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:29.614183 | orchestrator |
2026-03-18 04:52:29.614196 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-18 04:52:29.614208 | orchestrator | Wednesday 18 March 2026 04:52:21 +0000 (0:00:00.755) 0:08:53.012 *******
2026-03-18 04:52:29.614219 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:29.614230 | orchestrator |
2026-03-18 04:52:29.614241 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-18 04:52:29.614252 | orchestrator | Wednesday 18 March 2026 04:52:21 +0000 (0:00:00.423) 0:08:53.435 *******
2026-03-18 04:52:29.614264 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-18 04:52:29.614275 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:29.614285 | orchestrator |
2026-03-18 04:52:29.614296 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-18 04:52:29.614307 | orchestrator | Wednesday 18 March 2026 04:52:22 +0000 (0:00:00.348) 0:08:53.784 *******
2026-03-18 04:52:29.614318 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:29.614358 | orchestrator |
2026-03-18 04:52:29.614371 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-18 04:52:29.614405 | orchestrator | Wednesday 18 March 2026 04:52:22 +0000 (0:00:00.232) 0:08:54.016 *******
2026-03-18 04:52:29.614416 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-18 04:52:29.614427 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-18 04:52:29.614438 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-18 04:52:29.614449 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:29.614460 | orchestrator |
2026-03-18 04:52:29.614471 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-18 04:52:29.614482 | orchestrator | Wednesday 18 March 2026 04:52:22 +0000 (0:00:00.425) 0:08:54.442 *******
2026-03-18 04:52:29.614494 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:29.614506 | orchestrator |
2026-03-18 04:52:29.614519 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-18 04:52:29.614531 | orchestrator | Wednesday 18 March 2026 04:52:22 +0000 (0:00:00.157) 0:08:54.599 *******
2026-03-18 04:52:29.614543 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:29.614581 | orchestrator |
2026-03-18 04:52:29.614595 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-18 04:52:29.614621 | orchestrator | Wednesday 18 March 2026 04:52:23 +0000 (0:00:00.149) 0:08:54.748 *******
2026-03-18 04:52:29.614635 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:29.614648 | orchestrator |
2026-03-18 04:52:29.614661 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-18 04:52:29.614674 | orchestrator | Wednesday 18 March 2026 04:52:23 +0000 (0:00:00.136) 0:08:54.884 *******
2026-03-18 04:52:29.614686 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:52:29.614699 | orchestrator |
2026-03-18 04:52:29.614711 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-03-18 04:52:29.614724 | orchestrator |
2026-03-18 04:52:29.614737 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-03-18 04:52:29.614750 | orchestrator | Wednesday 18 March 2026 04:52:23 +0000 (0:00:00.622) 0:08:55.507 *******
2026-03-18 04:52:29.614763 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:52:29.614775 | orchestrator |
2026-03-18 04:52:29.614788 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-18 04:52:29.614801 | orchestrator | Wednesday 18 March 2026 04:52:24 +0000 (0:00:00.232) 0:08:55.740 *******
2026-03-18 04:52:29.614814 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:52:29.614826 | orchestrator |
2026-03-18 04:52:29.614839 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-18 04:52:29.614852 | orchestrator | Wednesday 18 March 2026 04:52:24 +0000 (0:00:00.489) 0:08:56.229 *******
2026-03-18 04:52:29.614865 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:52:29.614875 | orchestrator |
2026-03-18 04:52:29.614886 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-18 04:52:29.614897 | orchestrator | Wednesday 18 March 2026 04:52:24 +0000 (0:00:00.133) 0:08:56.363 *******
2026-03-18 04:52:29.614908 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:52:29.614919 | orchestrator |
2026-03-18 04:52:29.614930 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-18 04:52:29.614941 | orchestrator | Wednesday 18 March 2026 04:52:24 +0000 (0:00:00.165) 0:08:56.528 *******
2026-03-18 04:52:29.614952 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:52:29.614963 | orchestrator |
2026-03-18 04:52:29.614974 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-18 04:52:29.614986 | orchestrator | Wednesday 18 March 2026 04:52:25 +0000 (0:00:00.154) 0:08:56.683 *******
2026-03-18 04:52:29.614996 |
orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615007 | orchestrator | 2026-03-18 04:52:29.615018 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-18 04:52:29.615029 | orchestrator | Wednesday 18 March 2026 04:52:25 +0000 (0:00:00.150) 0:08:56.834 ******* 2026-03-18 04:52:29.615040 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615060 | orchestrator | 2026-03-18 04:52:29.615071 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-18 04:52:29.615082 | orchestrator | Wednesday 18 March 2026 04:52:25 +0000 (0:00:00.162) 0:08:56.996 ******* 2026-03-18 04:52:29.615093 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615104 | orchestrator | 2026-03-18 04:52:29.615115 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-18 04:52:29.615126 | orchestrator | Wednesday 18 March 2026 04:52:25 +0000 (0:00:00.164) 0:08:57.161 ******* 2026-03-18 04:52:29.615137 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615148 | orchestrator | 2026-03-18 04:52:29.615159 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-18 04:52:29.615187 | orchestrator | Wednesday 18 March 2026 04:52:25 +0000 (0:00:00.137) 0:08:57.299 ******* 2026-03-18 04:52:29.615199 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615210 | orchestrator | 2026-03-18 04:52:29.615221 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-18 04:52:29.615232 | orchestrator | Wednesday 18 March 2026 04:52:25 +0000 (0:00:00.164) 0:08:57.463 ******* 2026-03-18 04:52:29.615243 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615254 | orchestrator | 2026-03-18 04:52:29.615264 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 
2026-03-18 04:52:29.615275 | orchestrator | Wednesday 18 March 2026 04:52:25 +0000 (0:00:00.151) 0:08:57.614 ******* 2026-03-18 04:52:29.615286 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615297 | orchestrator | 2026-03-18 04:52:29.615308 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-18 04:52:29.615319 | orchestrator | Wednesday 18 March 2026 04:52:26 +0000 (0:00:00.224) 0:08:57.838 ******* 2026-03-18 04:52:29.615329 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615340 | orchestrator | 2026-03-18 04:52:29.615366 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-18 04:52:29.615378 | orchestrator | Wednesday 18 March 2026 04:52:26 +0000 (0:00:00.420) 0:08:58.258 ******* 2026-03-18 04:52:29.615389 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615411 | orchestrator | 2026-03-18 04:52:29.615422 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-18 04:52:29.615433 | orchestrator | Wednesday 18 March 2026 04:52:26 +0000 (0:00:00.141) 0:08:58.400 ******* 2026-03-18 04:52:29.615443 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615454 | orchestrator | 2026-03-18 04:52:29.615464 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-18 04:52:29.615475 | orchestrator | Wednesday 18 March 2026 04:52:26 +0000 (0:00:00.136) 0:08:58.536 ******* 2026-03-18 04:52:29.615486 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615497 | orchestrator | 2026-03-18 04:52:29.615507 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-18 04:52:29.615518 | orchestrator | Wednesday 18 March 2026 04:52:27 +0000 (0:00:00.144) 0:08:58.680 ******* 2026-03-18 04:52:29.615528 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615539 
| orchestrator | 2026-03-18 04:52:29.615549 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-18 04:52:29.615591 | orchestrator | Wednesday 18 March 2026 04:52:27 +0000 (0:00:00.150) 0:08:58.831 ******* 2026-03-18 04:52:29.615603 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615614 | orchestrator | 2026-03-18 04:52:29.615625 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-18 04:52:29.615641 | orchestrator | Wednesday 18 March 2026 04:52:27 +0000 (0:00:00.141) 0:08:58.973 ******* 2026-03-18 04:52:29.615652 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615663 | orchestrator | 2026-03-18 04:52:29.615674 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-18 04:52:29.615685 | orchestrator | Wednesday 18 March 2026 04:52:27 +0000 (0:00:00.153) 0:08:59.126 ******* 2026-03-18 04:52:29.615704 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615714 | orchestrator | 2026-03-18 04:52:29.615725 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-18 04:52:29.615736 | orchestrator | Wednesday 18 March 2026 04:52:27 +0000 (0:00:00.148) 0:08:59.275 ******* 2026-03-18 04:52:29.615746 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615757 | orchestrator | 2026-03-18 04:52:29.615768 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-18 04:52:29.615779 | orchestrator | Wednesday 18 March 2026 04:52:27 +0000 (0:00:00.160) 0:08:59.436 ******* 2026-03-18 04:52:29.615789 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615800 | orchestrator | 2026-03-18 04:52:29.615810 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-18 04:52:29.615821 | orchestrator | Wednesday 18 
March 2026 04:52:27 +0000 (0:00:00.149) 0:08:59.586 ******* 2026-03-18 04:52:29.615832 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615842 | orchestrator | 2026-03-18 04:52:29.615853 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-18 04:52:29.615864 | orchestrator | Wednesday 18 March 2026 04:52:28 +0000 (0:00:00.155) 0:08:59.741 ******* 2026-03-18 04:52:29.615874 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615885 | orchestrator | 2026-03-18 04:52:29.615895 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-18 04:52:29.615906 | orchestrator | Wednesday 18 March 2026 04:52:28 +0000 (0:00:00.218) 0:08:59.960 ******* 2026-03-18 04:52:29.615917 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615927 | orchestrator | 2026-03-18 04:52:29.615938 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-18 04:52:29.615949 | orchestrator | Wednesday 18 March 2026 04:52:28 +0000 (0:00:00.136) 0:09:00.097 ******* 2026-03-18 04:52:29.615959 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.615970 | orchestrator | 2026-03-18 04:52:29.615980 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-18 04:52:29.615991 | orchestrator | Wednesday 18 March 2026 04:52:28 +0000 (0:00:00.454) 0:09:00.551 ******* 2026-03-18 04:52:29.616002 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.616012 | orchestrator | 2026-03-18 04:52:29.616023 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-18 04:52:29.616034 | orchestrator | Wednesday 18 March 2026 04:52:29 +0000 (0:00:00.169) 0:09:00.721 ******* 2026-03-18 04:52:29.616045 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.616055 | orchestrator | 2026-03-18 04:52:29.616066 | 
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-18 04:52:29.616077 | orchestrator | Wednesday 18 March 2026 04:52:29 +0000 (0:00:00.171) 0:09:00.893 ******* 2026-03-18 04:52:29.616087 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:29.616098 | orchestrator | 2026-03-18 04:52:29.616109 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-18 04:52:29.616119 | orchestrator | Wednesday 18 March 2026 04:52:29 +0000 (0:00:00.188) 0:09:01.082 ******* 2026-03-18 04:52:29.616138 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.994972 | orchestrator | 2026-03-18 04:52:37.995044 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-18 04:52:37.995051 | orchestrator | Wednesday 18 March 2026 04:52:29 +0000 (0:00:00.143) 0:09:01.225 ******* 2026-03-18 04:52:37.995057 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995062 | orchestrator | 2026-03-18 04:52:37.995066 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-18 04:52:37.995070 | orchestrator | Wednesday 18 March 2026 04:52:29 +0000 (0:00:00.148) 0:09:01.373 ******* 2026-03-18 04:52:37.995074 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995078 | orchestrator | 2026-03-18 04:52:37.995082 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-18 04:52:37.995086 | orchestrator | Wednesday 18 March 2026 04:52:29 +0000 (0:00:00.225) 0:09:01.599 ******* 2026-03-18 04:52:37.995104 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995108 | orchestrator | 2026-03-18 04:52:37.995112 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-18 04:52:37.995116 | orchestrator | Wednesday 18 March 2026 04:52:30 +0000 (0:00:00.130) 0:09:01.729 ******* 
2026-03-18 04:52:37.995120 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995123 | orchestrator | 2026-03-18 04:52:37.995127 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-18 04:52:37.995131 | orchestrator | Wednesday 18 March 2026 04:52:30 +0000 (0:00:00.137) 0:09:01.867 ******* 2026-03-18 04:52:37.995135 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995138 | orchestrator | 2026-03-18 04:52:37.995142 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-18 04:52:37.995146 | orchestrator | Wednesday 18 March 2026 04:52:30 +0000 (0:00:00.145) 0:09:02.012 ******* 2026-03-18 04:52:37.995150 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995153 | orchestrator | 2026-03-18 04:52:37.995157 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-18 04:52:37.995161 | orchestrator | Wednesday 18 March 2026 04:52:30 +0000 (0:00:00.142) 0:09:02.155 ******* 2026-03-18 04:52:37.995164 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995168 | orchestrator | 2026-03-18 04:52:37.995172 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-18 04:52:37.995176 | orchestrator | Wednesday 18 March 2026 04:52:30 +0000 (0:00:00.137) 0:09:02.292 ******* 2026-03-18 04:52:37.995179 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995183 | orchestrator | 2026-03-18 04:52:37.995187 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-18 04:52:37.995191 | orchestrator | Wednesday 18 March 2026 04:52:31 +0000 (0:00:00.418) 0:09:02.711 ******* 2026-03-18 04:52:37.995203 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995207 | orchestrator | 2026-03-18 04:52:37.995211 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch 
--report' to see how many osds are to be created] *** 2026-03-18 04:52:37.995216 | orchestrator | Wednesday 18 March 2026 04:52:31 +0000 (0:00:00.140) 0:09:02.852 ******* 2026-03-18 04:52:37.995220 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995224 | orchestrator | 2026-03-18 04:52:37.995227 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-18 04:52:37.995231 | orchestrator | Wednesday 18 March 2026 04:52:31 +0000 (0:00:00.144) 0:09:02.997 ******* 2026-03-18 04:52:37.995235 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995239 | orchestrator | 2026-03-18 04:52:37.995243 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-18 04:52:37.995246 | orchestrator | Wednesday 18 March 2026 04:52:31 +0000 (0:00:00.163) 0:09:03.160 ******* 2026-03-18 04:52:37.995250 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995254 | orchestrator | 2026-03-18 04:52:37.995257 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-18 04:52:37.995261 | orchestrator | Wednesday 18 March 2026 04:52:31 +0000 (0:00:00.151) 0:09:03.311 ******* 2026-03-18 04:52:37.995265 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995269 | orchestrator | 2026-03-18 04:52:37.995272 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-18 04:52:37.995276 | orchestrator | Wednesday 18 March 2026 04:52:31 +0000 (0:00:00.144) 0:09:03.456 ******* 2026-03-18 04:52:37.995280 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995283 | orchestrator | 2026-03-18 04:52:37.995287 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-18 04:52:37.995291 | orchestrator | Wednesday 18 March 2026 04:52:31 +0000 
(0:00:00.145) 0:09:03.601 ******* 2026-03-18 04:52:37.995294 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995298 | orchestrator | 2026-03-18 04:52:37.995302 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-18 04:52:37.995310 | orchestrator | Wednesday 18 March 2026 04:52:32 +0000 (0:00:00.142) 0:09:03.744 ******* 2026-03-18 04:52:37.995314 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995317 | orchestrator | 2026-03-18 04:52:37.995321 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-18 04:52:37.995325 | orchestrator | Wednesday 18 March 2026 04:52:32 +0000 (0:00:00.233) 0:09:03.977 ******* 2026-03-18 04:52:37.995329 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995332 | orchestrator | 2026-03-18 04:52:37.995336 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-18 04:52:37.995340 | orchestrator | Wednesday 18 March 2026 04:52:32 +0000 (0:00:00.136) 0:09:04.114 ******* 2026-03-18 04:52:37.995344 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995347 | orchestrator | 2026-03-18 04:52:37.995351 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-18 04:52:37.995355 | orchestrator | Wednesday 18 March 2026 04:52:32 +0000 (0:00:00.252) 0:09:04.367 ******* 2026-03-18 04:52:37.995358 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995362 | orchestrator | 2026-03-18 04:52:37.995366 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-18 04:52:37.995369 | orchestrator | Wednesday 18 March 2026 04:52:32 +0000 (0:00:00.149) 0:09:04.516 ******* 2026-03-18 04:52:37.995383 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995387 | orchestrator | 2026-03-18 04:52:37.995391 | orchestrator | TASK [ceph-facts : 
Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 04:52:37.995396 | orchestrator | Wednesday 18 March 2026 04:52:33 +0000 (0:00:00.139) 0:09:04.655 ******* 2026-03-18 04:52:37.995399 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995403 | orchestrator | 2026-03-18 04:52:37.995407 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-18 04:52:37.995410 | orchestrator | Wednesday 18 March 2026 04:52:33 +0000 (0:00:00.451) 0:09:05.107 ******* 2026-03-18 04:52:37.995414 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995418 | orchestrator | 2026-03-18 04:52:37.995421 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 04:52:37.995425 | orchestrator | Wednesday 18 March 2026 04:52:33 +0000 (0:00:00.141) 0:09:05.248 ******* 2026-03-18 04:52:37.995429 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995432 | orchestrator | 2026-03-18 04:52:37.995436 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 04:52:37.995440 | orchestrator | Wednesday 18 March 2026 04:52:33 +0000 (0:00:00.164) 0:09:05.413 ******* 2026-03-18 04:52:37.995444 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995447 | orchestrator | 2026-03-18 04:52:37.995451 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 04:52:37.995455 | orchestrator | Wednesday 18 March 2026 04:52:33 +0000 (0:00:00.149) 0:09:05.563 ******* 2026-03-18 04:52:37.995458 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-18 04:52:37.995463 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-18 04:52:37.995467 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-18 04:52:37.995470 | orchestrator | 
skipping: [testbed-node-1] 2026-03-18 04:52:37.995474 | orchestrator | 2026-03-18 04:52:37.995478 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 04:52:37.995481 | orchestrator | Wednesday 18 March 2026 04:52:34 +0000 (0:00:00.423) 0:09:05.986 ******* 2026-03-18 04:52:37.995485 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-18 04:52:37.995489 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-18 04:52:37.995492 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-18 04:52:37.995496 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995500 | orchestrator | 2026-03-18 04:52:37.995503 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 04:52:37.995515 | orchestrator | Wednesday 18 March 2026 04:52:34 +0000 (0:00:00.399) 0:09:06.386 ******* 2026-03-18 04:52:37.995518 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-18 04:52:37.995522 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-18 04:52:37.995526 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-18 04:52:37.995530 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995533 | orchestrator | 2026-03-18 04:52:37.995537 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 04:52:37.995541 | orchestrator | Wednesday 18 March 2026 04:52:35 +0000 (0:00:00.410) 0:09:06.797 ******* 2026-03-18 04:52:37.995544 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995549 | orchestrator | 2026-03-18 04:52:37.995553 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 04:52:37.995558 | orchestrator | Wednesday 18 March 2026 04:52:35 +0000 (0:00:00.131) 0:09:06.928 ******* 2026-03-18 04:52:37.995605 | 
orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-18 04:52:37.995610 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995614 | orchestrator | 2026-03-18 04:52:37.995618 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-18 04:52:37.995623 | orchestrator | Wednesday 18 March 2026 04:52:35 +0000 (0:00:00.361) 0:09:07.290 ******* 2026-03-18 04:52:37.995627 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995631 | orchestrator | 2026-03-18 04:52:37.995635 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-18 04:52:37.995640 | orchestrator | Wednesday 18 March 2026 04:52:35 +0000 (0:00:00.203) 0:09:07.494 ******* 2026-03-18 04:52:37.995644 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-18 04:52:37.995648 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-18 04:52:37.995652 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-18 04:52:37.995656 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995659 | orchestrator | 2026-03-18 04:52:37.995663 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-18 04:52:37.995667 | orchestrator | Wednesday 18 March 2026 04:52:36 +0000 (0:00:00.738) 0:09:08.233 ******* 2026-03-18 04:52:37.995671 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995674 | orchestrator | 2026-03-18 04:52:37.995678 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-18 04:52:37.995682 | orchestrator | Wednesday 18 March 2026 04:52:36 +0000 (0:00:00.141) 0:09:08.374 ******* 2026-03-18 04:52:37.995686 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995689 | orchestrator | 2026-03-18 04:52:37.995693 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] 
**************************************** 2026-03-18 04:52:37.995697 | orchestrator | Wednesday 18 March 2026 04:52:37 +0000 (0:00:00.446) 0:09:08.821 ******* 2026-03-18 04:52:37.995700 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995704 | orchestrator | 2026-03-18 04:52:37.995708 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-18 04:52:37.995712 | orchestrator | Wednesday 18 March 2026 04:52:37 +0000 (0:00:00.156) 0:09:08.978 ******* 2026-03-18 04:52:37.995715 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:52:37.995719 | orchestrator | 2026-03-18 04:52:37.995723 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-03-18 04:52:37.995727 | orchestrator | 2026-03-18 04:52:37.995730 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-18 04:52:37.995737 | orchestrator | Wednesday 18 March 2026 04:52:37 +0000 (0:00:00.622) 0:09:09.601 ******* 2026-03-18 04:52:45.304542 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.304707 | orchestrator | 2026-03-18 04:52:45.304725 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 04:52:45.304737 | orchestrator | Wednesday 18 March 2026 04:52:38 +0000 (0:00:00.222) 0:09:09.823 ******* 2026-03-18 04:52:45.304771 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.304782 | orchestrator | 2026-03-18 04:52:45.304791 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-18 04:52:45.304800 | orchestrator | Wednesday 18 March 2026 04:52:38 +0000 (0:00:00.239) 0:09:10.063 ******* 2026-03-18 04:52:45.304809 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.304819 | orchestrator | 2026-03-18 04:52:45.304829 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 
2026-03-18 04:52:45.304839 | orchestrator | Wednesday 18 March 2026 04:52:38 +0000 (0:00:00.145) 0:09:10.209 ******* 2026-03-18 04:52:45.304850 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.304860 | orchestrator | 2026-03-18 04:52:45.304869 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-18 04:52:45.304878 | orchestrator | Wednesday 18 March 2026 04:52:38 +0000 (0:00:00.142) 0:09:10.352 ******* 2026-03-18 04:52:45.304887 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.304896 | orchestrator | 2026-03-18 04:52:45.304905 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-18 04:52:45.304915 | orchestrator | Wednesday 18 March 2026 04:52:38 +0000 (0:00:00.149) 0:09:10.501 ******* 2026-03-18 04:52:45.304926 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.304936 | orchestrator | 2026-03-18 04:52:45.304945 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-18 04:52:45.304954 | orchestrator | Wednesday 18 March 2026 04:52:39 +0000 (0:00:00.133) 0:09:10.635 ******* 2026-03-18 04:52:45.304963 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.304972 | orchestrator | 2026-03-18 04:52:45.304982 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-18 04:52:45.304992 | orchestrator | Wednesday 18 March 2026 04:52:39 +0000 (0:00:00.161) 0:09:10.797 ******* 2026-03-18 04:52:45.305001 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305011 | orchestrator | 2026-03-18 04:52:45.305021 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-18 04:52:45.305032 | orchestrator | Wednesday 18 March 2026 04:52:39 +0000 (0:00:00.416) 0:09:11.214 ******* 2026-03-18 04:52:45.305042 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305052 
| orchestrator | 2026-03-18 04:52:45.305062 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-18 04:52:45.305085 | orchestrator | Wednesday 18 March 2026 04:52:39 +0000 (0:00:00.152) 0:09:11.367 ******* 2026-03-18 04:52:45.305096 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305105 | orchestrator | 2026-03-18 04:52:45.305115 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-18 04:52:45.305125 | orchestrator | Wednesday 18 March 2026 04:52:39 +0000 (0:00:00.139) 0:09:11.506 ******* 2026-03-18 04:52:45.305135 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305145 | orchestrator | 2026-03-18 04:52:45.305156 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-18 04:52:45.305167 | orchestrator | Wednesday 18 March 2026 04:52:40 +0000 (0:00:00.156) 0:09:11.663 ******* 2026-03-18 04:52:45.305178 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305189 | orchestrator | 2026-03-18 04:52:45.305199 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-18 04:52:45.305214 | orchestrator | Wednesday 18 March 2026 04:52:40 +0000 (0:00:00.215) 0:09:11.878 ******* 2026-03-18 04:52:45.305225 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305236 | orchestrator | 2026-03-18 04:52:45.305246 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-18 04:52:45.305257 | orchestrator | Wednesday 18 March 2026 04:52:40 +0000 (0:00:00.141) 0:09:12.020 ******* 2026-03-18 04:52:45.305268 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305279 | orchestrator | 2026-03-18 04:52:45.305290 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-18 04:52:45.305301 | orchestrator | Wednesday 18 March 2026 
04:52:40 +0000 (0:00:00.151) 0:09:12.172 ******* 2026-03-18 04:52:45.305322 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305333 | orchestrator | 2026-03-18 04:52:45.305342 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-18 04:52:45.305352 | orchestrator | Wednesday 18 March 2026 04:52:40 +0000 (0:00:00.144) 0:09:12.316 ******* 2026-03-18 04:52:45.305363 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305373 | orchestrator | 2026-03-18 04:52:45.305384 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-18 04:52:45.305395 | orchestrator | Wednesday 18 March 2026 04:52:40 +0000 (0:00:00.148) 0:09:12.464 ******* 2026-03-18 04:52:45.305406 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305416 | orchestrator | 2026-03-18 04:52:45.305426 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-18 04:52:45.305436 | orchestrator | Wednesday 18 March 2026 04:52:40 +0000 (0:00:00.128) 0:09:12.593 ******* 2026-03-18 04:52:45.305444 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305453 | orchestrator | 2026-03-18 04:52:45.305461 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-18 04:52:45.305470 | orchestrator | Wednesday 18 March 2026 04:52:41 +0000 (0:00:00.162) 0:09:12.755 ******* 2026-03-18 04:52:45.305480 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305490 | orchestrator | 2026-03-18 04:52:45.305500 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-18 04:52:45.305511 | orchestrator | Wednesday 18 March 2026 04:52:41 +0000 (0:00:00.146) 0:09:12.902 ******* 2026-03-18 04:52:45.305522 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305532 | orchestrator | 2026-03-18 04:52:45.305542 | 
orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-18 04:52:45.305552 | orchestrator | Wednesday 18 March 2026 04:52:41 +0000 (0:00:00.450) 0:09:13.353 ******* 2026-03-18 04:52:45.305562 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305593 | orchestrator | 2026-03-18 04:52:45.305619 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-18 04:52:45.305630 | orchestrator | Wednesday 18 March 2026 04:52:41 +0000 (0:00:00.154) 0:09:13.507 ******* 2026-03-18 04:52:45.305639 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305648 | orchestrator | 2026-03-18 04:52:45.305658 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-18 04:52:45.305668 | orchestrator | Wednesday 18 March 2026 04:52:42 +0000 (0:00:00.145) 0:09:13.652 ******* 2026-03-18 04:52:45.305678 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305688 | orchestrator | 2026-03-18 04:52:45.305698 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-18 04:52:45.305707 | orchestrator | Wednesday 18 March 2026 04:52:42 +0000 (0:00:00.148) 0:09:13.800 ******* 2026-03-18 04:52:45.305716 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305725 | orchestrator | 2026-03-18 04:52:45.305733 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-18 04:52:45.305742 | orchestrator | Wednesday 18 March 2026 04:52:42 +0000 (0:00:00.221) 0:09:14.022 ******* 2026-03-18 04:52:45.305751 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305760 | orchestrator | 2026-03-18 04:52:45.305770 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-18 04:52:45.305780 | orchestrator | Wednesday 18 March 2026 04:52:42 +0000 (0:00:00.152) 0:09:14.175 ******* 
2026-03-18 04:52:45.305789 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305799 | orchestrator | 2026-03-18 04:52:45.305809 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-18 04:52:45.305819 | orchestrator | Wednesday 18 March 2026 04:52:42 +0000 (0:00:00.166) 0:09:14.341 ******* 2026-03-18 04:52:45.305827 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305837 | orchestrator | 2026-03-18 04:52:45.305846 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-18 04:52:45.305854 | orchestrator | Wednesday 18 March 2026 04:52:42 +0000 (0:00:00.139) 0:09:14.480 ******* 2026-03-18 04:52:45.305870 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305879 | orchestrator | 2026-03-18 04:52:45.305887 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-18 04:52:45.305897 | orchestrator | Wednesday 18 March 2026 04:52:42 +0000 (0:00:00.135) 0:09:14.616 ******* 2026-03-18 04:52:45.305906 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305916 | orchestrator | 2026-03-18 04:52:45.305926 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-18 04:52:45.305942 | orchestrator | Wednesday 18 March 2026 04:52:43 +0000 (0:00:00.151) 0:09:14.767 ******* 2026-03-18 04:52:45.305952 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305962 | orchestrator | 2026-03-18 04:52:45.305971 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-18 04:52:45.305980 | orchestrator | Wednesday 18 March 2026 04:52:43 +0000 (0:00:00.183) 0:09:14.950 ******* 2026-03-18 04:52:45.305989 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.305998 | orchestrator | 2026-03-18 04:52:45.306007 | orchestrator | TASK [ceph-container-common : Include release.yml] 
***************************** 2026-03-18 04:52:45.306067 | orchestrator | Wednesday 18 March 2026 04:52:43 +0000 (0:00:00.139) 0:09:15.090 ******* 2026-03-18 04:52:45.306079 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.306089 | orchestrator | 2026-03-18 04:52:45.306097 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-18 04:52:45.306105 | orchestrator | Wednesday 18 March 2026 04:52:43 +0000 (0:00:00.513) 0:09:15.604 ******* 2026-03-18 04:52:45.306114 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.306124 | orchestrator | 2026-03-18 04:52:45.306133 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-18 04:52:45.306142 | orchestrator | Wednesday 18 March 2026 04:52:44 +0000 (0:00:00.144) 0:09:15.748 ******* 2026-03-18 04:52:45.306152 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.306162 | orchestrator | 2026-03-18 04:52:45.306172 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-18 04:52:45.306182 | orchestrator | Wednesday 18 March 2026 04:52:44 +0000 (0:00:00.156) 0:09:15.904 ******* 2026-03-18 04:52:45.306191 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.306200 | orchestrator | 2026-03-18 04:52:45.306209 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-18 04:52:45.306217 | orchestrator | Wednesday 18 March 2026 04:52:44 +0000 (0:00:00.153) 0:09:16.058 ******* 2026-03-18 04:52:45.306226 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.306234 | orchestrator | 2026-03-18 04:52:45.306244 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-18 04:52:45.306254 | orchestrator | Wednesday 18 March 2026 04:52:44 +0000 (0:00:00.146) 0:09:16.204 ******* 2026-03-18 04:52:45.306264 | orchestrator | skipping: 
[testbed-node-2] 2026-03-18 04:52:45.306274 | orchestrator | 2026-03-18 04:52:45.306284 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-18 04:52:45.306293 | orchestrator | Wednesday 18 March 2026 04:52:44 +0000 (0:00:00.147) 0:09:16.352 ******* 2026-03-18 04:52:45.306303 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.306311 | orchestrator | 2026-03-18 04:52:45.306321 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-18 04:52:45.306329 | orchestrator | Wednesday 18 March 2026 04:52:44 +0000 (0:00:00.119) 0:09:16.471 ******* 2026-03-18 04:52:45.306338 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.306346 | orchestrator | 2026-03-18 04:52:45.306356 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-18 04:52:45.306367 | orchestrator | Wednesday 18 March 2026 04:52:45 +0000 (0:00:00.156) 0:09:16.627 ******* 2026-03-18 04:52:45.306377 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.306387 | orchestrator | 2026-03-18 04:52:45.306397 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-18 04:52:45.306414 | orchestrator | Wednesday 18 March 2026 04:52:45 +0000 (0:00:00.154) 0:09:16.782 ******* 2026-03-18 04:52:45.306423 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:45.306433 | orchestrator | 2026-03-18 04:52:45.306449 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-18 04:52:59.135644 | orchestrator | Wednesday 18 March 2026 04:52:45 +0000 (0:00:00.131) 0:09:16.913 ******* 2026-03-18 04:52:59.135724 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.135731 | orchestrator | 2026-03-18 04:52:59.135736 | orchestrator | TASK [ceph-config : Run 
'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-18 04:52:59.135741 | orchestrator | Wednesday 18 March 2026 04:52:45 +0000 (0:00:00.135) 0:09:17.049 ******* 2026-03-18 04:52:59.135746 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.135750 | orchestrator | 2026-03-18 04:52:59.135754 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-18 04:52:59.135758 | orchestrator | Wednesday 18 March 2026 04:52:45 +0000 (0:00:00.132) 0:09:17.181 ******* 2026-03-18 04:52:59.135762 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.135765 | orchestrator | 2026-03-18 04:52:59.135769 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-18 04:52:59.135773 | orchestrator | Wednesday 18 March 2026 04:52:45 +0000 (0:00:00.143) 0:09:17.325 ******* 2026-03-18 04:52:59.135777 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.135780 | orchestrator | 2026-03-18 04:52:59.135784 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-18 04:52:59.135788 | orchestrator | Wednesday 18 March 2026 04:52:46 +0000 (0:00:00.411) 0:09:17.736 ******* 2026-03-18 04:52:59.135792 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.135795 | orchestrator | 2026-03-18 04:52:59.135799 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-18 04:52:59.135803 | orchestrator | Wednesday 18 March 2026 04:52:46 +0000 (0:00:00.244) 0:09:17.981 ******* 2026-03-18 04:52:59.135807 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.135810 | orchestrator | 2026-03-18 04:52:59.135814 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-18 04:52:59.135818 | orchestrator | Wednesday 18 March 2026 04:52:46 +0000 (0:00:00.147) 0:09:18.128 ******* 2026-03-18 
04:52:59.135821 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.135825 | orchestrator | 2026-03-18 04:52:59.135829 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-18 04:52:59.135833 | orchestrator | Wednesday 18 March 2026 04:52:46 +0000 (0:00:00.244) 0:09:18.372 ******* 2026-03-18 04:52:59.135836 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.135840 | orchestrator | 2026-03-18 04:52:59.135844 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-18 04:52:59.135858 | orchestrator | Wednesday 18 March 2026 04:52:46 +0000 (0:00:00.138) 0:09:18.511 ******* 2026-03-18 04:52:59.135869 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.135873 | orchestrator | 2026-03-18 04:52:59.135877 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 04:52:59.135881 | orchestrator | Wednesday 18 March 2026 04:52:47 +0000 (0:00:00.187) 0:09:18.698 ******* 2026-03-18 04:52:59.135885 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.135889 | orchestrator | 2026-03-18 04:52:59.135893 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-18 04:52:59.135896 | orchestrator | Wednesday 18 March 2026 04:52:47 +0000 (0:00:00.147) 0:09:18.846 ******* 2026-03-18 04:52:59.135906 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.135910 | orchestrator | 2026-03-18 04:52:59.135914 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 04:52:59.135917 | orchestrator | Wednesday 18 March 2026 04:52:47 +0000 (0:00:00.149) 0:09:18.996 ******* 2026-03-18 04:52:59.135921 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.135939 | orchestrator | 2026-03-18 04:52:59.135943 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 04:52:59.135947 | orchestrator | Wednesday 18 March 2026 04:52:47 +0000 (0:00:00.141) 0:09:19.137 ******* 2026-03-18 04:52:59.135950 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.135954 | orchestrator | 2026-03-18 04:52:59.135958 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 04:52:59.135962 | orchestrator | Wednesday 18 March 2026 04:52:47 +0000 (0:00:00.134) 0:09:19.272 ******* 2026-03-18 04:52:59.135966 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-18 04:52:59.135970 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-18 04:52:59.135974 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-18 04:52:59.135977 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.135981 | orchestrator | 2026-03-18 04:52:59.135985 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 04:52:59.135988 | orchestrator | Wednesday 18 March 2026 04:52:48 +0000 (0:00:00.399) 0:09:19.671 ******* 2026-03-18 04:52:59.135992 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-18 04:52:59.135996 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-18 04:52:59.136000 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-18 04:52:59.136003 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.136007 | orchestrator | 2026-03-18 04:52:59.136011 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 04:52:59.136015 | orchestrator | Wednesday 18 March 2026 04:52:48 +0000 (0:00:00.783) 0:09:20.454 ******* 2026-03-18 04:52:59.136018 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-18 04:52:59.136022 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-4)  2026-03-18 04:52:59.136025 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-18 04:52:59.136029 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.136033 | orchestrator | 2026-03-18 04:52:59.136036 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 04:52:59.136040 | orchestrator | Wednesday 18 March 2026 04:52:49 +0000 (0:00:00.759) 0:09:21.214 ******* 2026-03-18 04:52:59.136044 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.136048 | orchestrator | 2026-03-18 04:52:59.136051 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 04:52:59.136065 | orchestrator | Wednesday 18 March 2026 04:52:50 +0000 (0:00:00.436) 0:09:21.651 ******* 2026-03-18 04:52:59.136069 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-18 04:52:59.136073 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.136076 | orchestrator | 2026-03-18 04:52:59.136080 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-18 04:52:59.136084 | orchestrator | Wednesday 18 March 2026 04:52:50 +0000 (0:00:00.336) 0:09:21.988 ******* 2026-03-18 04:52:59.136087 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.136091 | orchestrator | 2026-03-18 04:52:59.136095 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-18 04:52:59.136099 | orchestrator | Wednesday 18 March 2026 04:52:50 +0000 (0:00:00.228) 0:09:22.216 ******* 2026-03-18 04:52:59.136102 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-18 04:52:59.136106 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-18 04:52:59.136110 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-18 04:52:59.136114 | orchestrator | skipping: 
[testbed-node-2] 2026-03-18 04:52:59.136117 | orchestrator | 2026-03-18 04:52:59.136121 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-18 04:52:59.136125 | orchestrator | Wednesday 18 March 2026 04:52:51 +0000 (0:00:00.442) 0:09:22.659 ******* 2026-03-18 04:52:59.136128 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.136136 | orchestrator | 2026-03-18 04:52:59.136140 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-18 04:52:59.136143 | orchestrator | Wednesday 18 March 2026 04:52:51 +0000 (0:00:00.164) 0:09:22.823 ******* 2026-03-18 04:52:59.136147 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.136151 | orchestrator | 2026-03-18 04:52:59.136154 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-18 04:52:59.136158 | orchestrator | Wednesday 18 March 2026 04:52:51 +0000 (0:00:00.145) 0:09:22.969 ******* 2026-03-18 04:52:59.136162 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.136166 | orchestrator | 2026-03-18 04:52:59.136169 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-18 04:52:59.136173 | orchestrator | Wednesday 18 March 2026 04:52:51 +0000 (0:00:00.137) 0:09:23.106 ******* 2026-03-18 04:52:59.136177 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:52:59.136180 | orchestrator | 2026-03-18 04:52:59.136185 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-18 04:52:59.136189 | orchestrator | 2026-03-18 04:52:59.136197 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-18 04:52:59.136201 | orchestrator | Wednesday 18 March 2026 04:52:52 +0000 (0:00:00.587) 0:09:23.694 ******* 2026-03-18 04:52:59.136205 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:52:59.136210 | 
orchestrator | 2026-03-18 04:52:59.136214 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-03-18 04:52:59.136218 | orchestrator | Wednesday 18 March 2026 04:52:54 +0000 (0:00:01.949) 0:09:25.643 ******* 2026-03-18 04:52:59.136223 | orchestrator | changed: [testbed-node-0] 2026-03-18 04:52:59.136227 | orchestrator | 2026-03-18 04:52:59.136231 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 04:52:59.136235 | orchestrator | Wednesday 18 March 2026 04:52:55 +0000 (0:00:01.836) 0:09:27.479 ******* 2026-03-18 04:52:59.136239 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-18 04:52:59.136244 | orchestrator | 2026-03-18 04:52:59.136248 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-18 04:52:59.136252 | orchestrator | Wednesday 18 March 2026 04:52:56 +0000 (0:00:00.277) 0:09:27.757 ******* 2026-03-18 04:52:59.136256 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:52:59.136261 | orchestrator | 2026-03-18 04:52:59.136265 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-18 04:52:59.136269 | orchestrator | Wednesday 18 March 2026 04:52:56 +0000 (0:00:00.470) 0:09:28.227 ******* 2026-03-18 04:52:59.136274 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:52:59.136278 | orchestrator | 2026-03-18 04:52:59.136282 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 04:52:59.136286 | orchestrator | Wednesday 18 March 2026 04:52:56 +0000 (0:00:00.147) 0:09:28.375 ******* 2026-03-18 04:52:59.136291 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:52:59.136295 | orchestrator | 2026-03-18 04:52:59.136299 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 04:52:59.136303 | orchestrator | 
Wednesday 18 March 2026 04:52:57 +0000 (0:00:00.581) 0:09:28.956 ******* 2026-03-18 04:52:59.136307 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:52:59.136311 | orchestrator | 2026-03-18 04:52:59.136316 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-18 04:52:59.136320 | orchestrator | Wednesday 18 March 2026 04:52:57 +0000 (0:00:00.160) 0:09:29.116 ******* 2026-03-18 04:52:59.136324 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:52:59.136328 | orchestrator | 2026-03-18 04:52:59.136333 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-18 04:52:59.136337 | orchestrator | Wednesday 18 March 2026 04:52:57 +0000 (0:00:00.149) 0:09:29.266 ******* 2026-03-18 04:52:59.136341 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:52:59.136346 | orchestrator | 2026-03-18 04:52:59.136350 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-18 04:52:59.136411 | orchestrator | Wednesday 18 March 2026 04:52:57 +0000 (0:00:00.180) 0:09:29.447 ******* 2026-03-18 04:52:59.136417 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:52:59.136421 | orchestrator | 2026-03-18 04:52:59.136425 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-18 04:52:59.136430 | orchestrator | Wednesday 18 March 2026 04:52:57 +0000 (0:00:00.155) 0:09:29.603 ******* 2026-03-18 04:52:59.136434 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:52:59.136438 | orchestrator | 2026-03-18 04:52:59.136442 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-18 04:52:59.136446 | orchestrator | Wednesday 18 March 2026 04:52:58 +0000 (0:00:00.148) 0:09:29.751 ******* 2026-03-18 04:52:59.136450 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:52:59.136457 | orchestrator | ok: [testbed-node-0 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:53:06.489181 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:53:06.489290 | orchestrator | 2026-03-18 04:53:06.489308 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-18 04:53:06.489330 | orchestrator | Wednesday 18 March 2026 04:52:59 +0000 (0:00:00.988) 0:09:30.740 ******* 2026-03-18 04:53:06.489349 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:06.489368 | orchestrator | 2026-03-18 04:53:06.489387 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-18 04:53:06.489406 | orchestrator | Wednesday 18 March 2026 04:52:59 +0000 (0:00:00.301) 0:09:31.042 ******* 2026-03-18 04:53:06.489424 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:53:06.489445 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:53:06.489464 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:53:06.489485 | orchestrator | 2026-03-18 04:53:06.489498 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-18 04:53:06.489509 | orchestrator | Wednesday 18 March 2026 04:53:01 +0000 (0:00:02.519) 0:09:33.561 ******* 2026-03-18 04:53:06.489520 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-18 04:53:06.489531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-18 04:53:06.489542 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-18 04:53:06.489553 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:06.489564 | orchestrator | 2026-03-18 04:53:06.489575 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-18 04:53:06.489621 | 
orchestrator | Wednesday 18 March 2026 04:53:02 +0000 (0:00:00.433) 0:09:33.994 ******* 2026-03-18 04:53:06.489635 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-18 04:53:06.489665 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-18 04:53:06.489677 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-18 04:53:06.489689 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:06.489700 | orchestrator | 2026-03-18 04:53:06.489714 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-18 04:53:06.489726 | orchestrator | Wednesday 18 March 2026 04:53:03 +0000 (0:00:00.642) 0:09:34.636 ******* 2026-03-18 04:53:06.489741 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:53:06.489778 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:53:06.489792 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:53:06.489805 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:06.489818 | orchestrator | 2026-03-18 04:53:06.489830 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-18 04:53:06.489843 | orchestrator | Wednesday 18 March 2026 04:53:03 +0000 (0:00:00.172) 0:09:34.809 ******* 2026-03-18 04:53:06.489901 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f231ed715636', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 04:53:00.323936', 'end': '2026-03-18 04:53:00.372091', 'delta': '0:00:00.048155', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f231ed715636'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-18 04:53:06.489930 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c6b616adb9bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 04:53:00.910239', 'end': '2026-03-18 04:53:00.953982', 'delta': '0:00:00.043743', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c6b616adb9bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-18 04:53:06.489951 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '38d5679b5612', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 04:53:01.471342', 'end': '2026-03-18 04:53:01.508485', 'delta': '0:00:00.037143', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['38d5679b5612'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-18 04:53:06.489964 | orchestrator | 2026-03-18 04:53:06.489977 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-18 04:53:06.489989 | orchestrator | Wednesday 18 March 2026 04:53:03 +0000 (0:00:00.209) 0:09:35.019 ******* 2026-03-18 04:53:06.490010 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:06.490084 | orchestrator | 2026-03-18 04:53:06.490097 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-18 04:53:06.490110 | orchestrator | Wednesday 18 March 2026 04:53:03 
+0000 (0:00:00.281) 0:09:35.300 ******* 2026-03-18 04:53:06.490123 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:06.490134 | orchestrator | 2026-03-18 04:53:06.490145 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-18 04:53:06.490155 | orchestrator | Wednesday 18 March 2026 04:53:03 +0000 (0:00:00.256) 0:09:35.557 ******* 2026-03-18 04:53:06.490179 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:06.490190 | orchestrator | 2026-03-18 04:53:06.490201 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-18 04:53:06.490212 | orchestrator | Wednesday 18 March 2026 04:53:04 +0000 (0:00:00.167) 0:09:35.724 ******* 2026-03-18 04:53:06.490222 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:06.490251 | orchestrator | 2026-03-18 04:53:06.490263 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 04:53:06.490273 | orchestrator | Wednesday 18 March 2026 04:53:05 +0000 (0:00:00.964) 0:09:36.689 ******* 2026-03-18 04:53:06.490284 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:06.490305 | orchestrator | 2026-03-18 04:53:06.490321 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-18 04:53:06.490339 | orchestrator | Wednesday 18 March 2026 04:53:05 +0000 (0:00:00.175) 0:09:36.865 ******* 2026-03-18 04:53:06.490359 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:06.490377 | orchestrator | 2026-03-18 04:53:06.490395 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-18 04:53:06.490414 | orchestrator | Wednesday 18 March 2026 04:53:05 +0000 (0:00:00.144) 0:09:37.009 ******* 2026-03-18 04:53:06.490434 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:06.490452 | orchestrator | 2026-03-18 04:53:06.490471 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-03-18 04:53:06.490489 | orchestrator | Wednesday 18 March 2026 04:53:05 +0000 (0:00:00.223) 0:09:37.233 ******* 2026-03-18 04:53:06.490507 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:06.490525 | orchestrator | 2026-03-18 04:53:06.490542 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-18 04:53:06.490580 | orchestrator | Wednesday 18 March 2026 04:53:05 +0000 (0:00:00.140) 0:09:37.374 ******* 2026-03-18 04:53:06.490626 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:06.490645 | orchestrator | 2026-03-18 04:53:06.490663 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-18 04:53:06.490683 | orchestrator | Wednesday 18 March 2026 04:53:05 +0000 (0:00:00.140) 0:09:37.515 ******* 2026-03-18 04:53:06.490701 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:06.490719 | orchestrator | 2026-03-18 04:53:06.490737 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-18 04:53:06.490755 | orchestrator | Wednesday 18 March 2026 04:53:06 +0000 (0:00:00.431) 0:09:37.946 ******* 2026-03-18 04:53:06.490773 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:06.490791 | orchestrator | 2026-03-18 04:53:06.490808 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-18 04:53:06.490846 | orchestrator | Wednesday 18 March 2026 04:53:06 +0000 (0:00:00.151) 0:09:38.098 ******* 2026-03-18 04:53:07.420546 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:07.420721 | orchestrator | 2026-03-18 04:53:07.420748 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-18 04:53:07.420766 | orchestrator | Wednesday 18 March 2026 04:53:06 +0000 (0:00:00.147) 0:09:38.245 ******* 2026-03-18 04:53:07.420785 | orchestrator | 
skipping: [testbed-node-0] 2026-03-18 04:53:07.420801 | orchestrator | 2026-03-18 04:53:07.420820 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-18 04:53:07.420840 | orchestrator | Wednesday 18 March 2026 04:53:06 +0000 (0:00:00.134) 0:09:38.380 ******* 2026-03-18 04:53:07.420889 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:07.420908 | orchestrator | 2026-03-18 04:53:07.420925 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-18 04:53:07.420942 | orchestrator | Wednesday 18 March 2026 04:53:06 +0000 (0:00:00.147) 0:09:38.527 ******* 2026-03-18 04:53:07.420963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:53:07.420983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:53:07.421009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:53:07.421022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 04:53:07.421034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:53:07.421047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:53:07.421064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-03-18 04:53:07.421121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd04444e1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 
'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-18 04:53:07.421158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:53:07.421177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:53:07.421194 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:07.421211 | orchestrator | 2026-03-18 04:53:07.421228 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-18 04:53:07.421246 | orchestrator | Wednesday 18 March 2026 04:53:07 +0000 (0:00:00.257) 0:09:38.785 ******* 2026-03-18 04:53:07.421266 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:53:07.421286 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:53:07.421323 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:53:10.632542 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:53:10.632751 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:53:10.632784 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:53:10.632806 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:53:10.632858 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd04444e1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:53:10.632922 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:53:10.632947 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:53:10.632967 | orchestrator | skipping: [testbed-node-0] 2026-03-18 
04:53:10.632990 | orchestrator | 2026-03-18 04:53:10.633011 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-18 04:53:10.633033 | orchestrator | Wednesday 18 March 2026 04:53:07 +0000 (0:00:00.246) 0:09:39.031 ******* 2026-03-18 04:53:10.633053 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:10.633074 | orchestrator | 2026-03-18 04:53:10.633096 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-18 04:53:10.633115 | orchestrator | Wednesday 18 March 2026 04:53:07 +0000 (0:00:00.482) 0:09:39.513 ******* 2026-03-18 04:53:10.633135 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:10.633155 | orchestrator | 2026-03-18 04:53:10.633175 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 04:53:10.633197 | orchestrator | Wednesday 18 March 2026 04:53:08 +0000 (0:00:00.155) 0:09:39.669 ******* 2026-03-18 04:53:10.633216 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:10.633247 | orchestrator | 2026-03-18 04:53:10.633268 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 04:53:10.633288 | orchestrator | Wednesday 18 March 2026 04:53:08 +0000 (0:00:00.467) 0:09:40.137 ******* 2026-03-18 04:53:10.633307 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:10.633320 | orchestrator | 2026-03-18 04:53:10.633332 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 04:53:10.633344 | orchestrator | Wednesday 18 March 2026 04:53:08 +0000 (0:00:00.141) 0:09:40.278 ******* 2026-03-18 04:53:10.633358 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:10.633370 | orchestrator | 2026-03-18 04:53:10.633382 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 04:53:10.633394 | orchestrator | Wednesday 18 March 2026 
04:53:08 +0000 (0:00:00.256) 0:09:40.535 ******* 2026-03-18 04:53:10.633406 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:10.633419 | orchestrator | 2026-03-18 04:53:10.633431 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-18 04:53:10.633441 | orchestrator | Wednesday 18 March 2026 04:53:09 +0000 (0:00:00.166) 0:09:40.701 ******* 2026-03-18 04:53:10.633452 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:53:10.633463 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-18 04:53:10.633474 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-18 04:53:10.633485 | orchestrator | 2026-03-18 04:53:10.633495 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-18 04:53:10.633506 | orchestrator | Wednesday 18 March 2026 04:53:10 +0000 (0:00:01.330) 0:09:42.032 ******* 2026-03-18 04:53:10.633516 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-18 04:53:10.633528 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-18 04:53:10.633538 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-18 04:53:10.633549 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:10.633583 | orchestrator | 2026-03-18 04:53:10.633662 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-18 04:53:20.974227 | orchestrator | Wednesday 18 March 2026 04:53:10 +0000 (0:00:00.205) 0:09:42.238 ******* 2026-03-18 04:53:20.974320 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.974332 | orchestrator | 2026-03-18 04:53:20.974339 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-18 04:53:20.974347 | orchestrator | Wednesday 18 March 2026 04:53:10 +0000 (0:00:00.151) 0:09:42.390 ******* 2026-03-18 04:53:20.974354 | 
orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:53:20.974361 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:53:20.974369 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:53:20.974375 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-18 04:53:20.974382 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 04:53:20.974389 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 04:53:20.974396 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 04:53:20.974403 | orchestrator | 2026-03-18 04:53:20.974422 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-18 04:53:20.974428 | orchestrator | Wednesday 18 March 2026 04:53:11 +0000 (0:00:00.849) 0:09:43.239 ******* 2026-03-18 04:53:20.974435 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:53:20.974442 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:53:20.974449 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:53:20.974456 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-18 04:53:20.974478 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 04:53:20.974485 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 04:53:20.974492 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 04:53:20.974498 | orchestrator | 2026-03-18 04:53:20.974505 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-18 04:53:20.974512 | orchestrator | Wednesday 18 March 2026 04:53:13 +0000 (0:00:01.738) 0:09:44.978 ******* 2026-03-18 04:53:20.974518 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-03-18 04:53:20.974526 | orchestrator | 2026-03-18 04:53:20.974533 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-18 04:53:20.974539 | orchestrator | Wednesday 18 March 2026 04:53:13 +0000 (0:00:00.255) 0:09:45.234 ******* 2026-03-18 04:53:20.974546 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-03-18 04:53:20.974553 | orchestrator | 2026-03-18 04:53:20.974559 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-18 04:53:20.974566 | orchestrator | Wednesday 18 March 2026 04:53:13 +0000 (0:00:00.232) 0:09:45.467 ******* 2026-03-18 04:53:20.974572 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:20.974644 | orchestrator | 2026-03-18 04:53:20.974651 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-18 04:53:20.974658 | orchestrator | Wednesday 18 March 2026 04:53:14 +0000 (0:00:00.583) 0:09:46.050 ******* 2026-03-18 04:53:20.974665 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.974671 | orchestrator | 2026-03-18 04:53:20.974678 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-18 04:53:20.974685 | orchestrator | Wednesday 18 March 2026 04:53:14 +0000 (0:00:00.117) 0:09:46.168 ******* 2026-03-18 04:53:20.974691 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.974698 | orchestrator | 2026-03-18 04:53:20.974704 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2026-03-18 04:53:20.974711 | orchestrator | Wednesday 18 March 2026 04:53:14 +0000 (0:00:00.139) 0:09:46.307 ******* 2026-03-18 04:53:20.974718 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.974724 | orchestrator | 2026-03-18 04:53:20.974731 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-18 04:53:20.974738 | orchestrator | Wednesday 18 March 2026 04:53:15 +0000 (0:00:00.429) 0:09:46.737 ******* 2026-03-18 04:53:20.974744 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:20.974751 | orchestrator | 2026-03-18 04:53:20.974758 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-18 04:53:20.974764 | orchestrator | Wednesday 18 March 2026 04:53:15 +0000 (0:00:00.552) 0:09:47.290 ******* 2026-03-18 04:53:20.974771 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.974778 | orchestrator | 2026-03-18 04:53:20.974784 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-18 04:53:20.974791 | orchestrator | Wednesday 18 March 2026 04:53:15 +0000 (0:00:00.142) 0:09:47.432 ******* 2026-03-18 04:53:20.974800 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.974807 | orchestrator | 2026-03-18 04:53:20.974815 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-18 04:53:20.974823 | orchestrator | Wednesday 18 March 2026 04:53:15 +0000 (0:00:00.146) 0:09:47.579 ******* 2026-03-18 04:53:20.974831 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:20.974838 | orchestrator | 2026-03-18 04:53:20.974846 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-18 04:53:20.974854 | orchestrator | Wednesday 18 March 2026 04:53:16 +0000 (0:00:00.603) 0:09:48.182 ******* 2026-03-18 04:53:20.974861 | orchestrator | ok: [testbed-node-0] 2026-03-18 
04:53:20.974869 | orchestrator | 2026-03-18 04:53:20.974877 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-18 04:53:20.974904 | orchestrator | Wednesday 18 March 2026 04:53:17 +0000 (0:00:00.539) 0:09:48.722 ******* 2026-03-18 04:53:20.974912 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.974920 | orchestrator | 2026-03-18 04:53:20.974927 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-18 04:53:20.974935 | orchestrator | Wednesday 18 March 2026 04:53:17 +0000 (0:00:00.153) 0:09:48.875 ******* 2026-03-18 04:53:20.974943 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:20.974951 | orchestrator | 2026-03-18 04:53:20.974958 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-18 04:53:20.974966 | orchestrator | Wednesday 18 March 2026 04:53:17 +0000 (0:00:00.168) 0:09:49.044 ******* 2026-03-18 04:53:20.974974 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.974982 | orchestrator | 2026-03-18 04:53:20.974989 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-18 04:53:20.974997 | orchestrator | Wednesday 18 March 2026 04:53:17 +0000 (0:00:00.142) 0:09:49.187 ******* 2026-03-18 04:53:20.975004 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.975012 | orchestrator | 2026-03-18 04:53:20.975020 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-18 04:53:20.975027 | orchestrator | Wednesday 18 March 2026 04:53:17 +0000 (0:00:00.139) 0:09:49.327 ******* 2026-03-18 04:53:20.975035 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.975043 | orchestrator | 2026-03-18 04:53:20.975055 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-18 04:53:20.975063 | orchestrator | Wednesday 18 March 
2026 04:53:17 +0000 (0:00:00.120) 0:09:49.447 ******* 2026-03-18 04:53:20.975071 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.975078 | orchestrator | 2026-03-18 04:53:20.975086 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-18 04:53:20.975093 | orchestrator | Wednesday 18 March 2026 04:53:17 +0000 (0:00:00.146) 0:09:49.594 ******* 2026-03-18 04:53:20.975101 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.975108 | orchestrator | 2026-03-18 04:53:20.975116 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-18 04:53:20.975123 | orchestrator | Wednesday 18 March 2026 04:53:18 +0000 (0:00:00.148) 0:09:49.742 ******* 2026-03-18 04:53:20.975131 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:20.975139 | orchestrator | 2026-03-18 04:53:20.975146 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-18 04:53:20.975154 | orchestrator | Wednesday 18 March 2026 04:53:18 +0000 (0:00:00.471) 0:09:50.214 ******* 2026-03-18 04:53:20.975161 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:20.975167 | orchestrator | 2026-03-18 04:53:20.975174 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-18 04:53:20.975180 | orchestrator | Wednesday 18 March 2026 04:53:18 +0000 (0:00:00.195) 0:09:50.410 ******* 2026-03-18 04:53:20.975187 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:20.975194 | orchestrator | 2026-03-18 04:53:20.975200 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-18 04:53:20.975207 | orchestrator | Wednesday 18 March 2026 04:53:19 +0000 (0:00:00.244) 0:09:50.654 ******* 2026-03-18 04:53:20.975214 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.975220 | orchestrator | 2026-03-18 04:53:20.975226 | orchestrator | TASK [ceph-common : 
Include installs/install_redhat_packages.yml] ************** 2026-03-18 04:53:20.975233 | orchestrator | Wednesday 18 March 2026 04:53:19 +0000 (0:00:00.166) 0:09:50.821 ******* 2026-03-18 04:53:20.975240 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.975246 | orchestrator | 2026-03-18 04:53:20.975253 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-18 04:53:20.975259 | orchestrator | Wednesday 18 March 2026 04:53:19 +0000 (0:00:00.144) 0:09:50.965 ******* 2026-03-18 04:53:20.975266 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.975272 | orchestrator | 2026-03-18 04:53:20.975279 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-18 04:53:20.975291 | orchestrator | Wednesday 18 March 2026 04:53:19 +0000 (0:00:00.137) 0:09:51.103 ******* 2026-03-18 04:53:20.975297 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.975304 | orchestrator | 2026-03-18 04:53:20.975310 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-18 04:53:20.975317 | orchestrator | Wednesday 18 March 2026 04:53:19 +0000 (0:00:00.144) 0:09:51.248 ******* 2026-03-18 04:53:20.975324 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.975330 | orchestrator | 2026-03-18 04:53:20.975337 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-18 04:53:20.975343 | orchestrator | Wednesday 18 March 2026 04:53:19 +0000 (0:00:00.134) 0:09:51.382 ******* 2026-03-18 04:53:20.975350 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.975356 | orchestrator | 2026-03-18 04:53:20.975363 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-18 04:53:20.975370 | orchestrator | Wednesday 18 March 2026 04:53:19 +0000 (0:00:00.132) 0:09:51.515 ******* 2026-03-18 04:53:20.975376 | 
orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.975383 | orchestrator | 2026-03-18 04:53:20.975389 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-18 04:53:20.975396 | orchestrator | Wednesday 18 March 2026 04:53:20 +0000 (0:00:00.156) 0:09:51.672 ******* 2026-03-18 04:53:20.975403 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.975409 | orchestrator | 2026-03-18 04:53:20.975416 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-18 04:53:20.975422 | orchestrator | Wednesday 18 March 2026 04:53:20 +0000 (0:00:00.162) 0:09:51.834 ******* 2026-03-18 04:53:20.975429 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.975435 | orchestrator | 2026-03-18 04:53:20.975442 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-18 04:53:20.975448 | orchestrator | Wednesday 18 March 2026 04:53:20 +0000 (0:00:00.137) 0:09:51.972 ******* 2026-03-18 04:53:20.975455 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.975462 | orchestrator | 2026-03-18 04:53:20.975468 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-18 04:53:20.975475 | orchestrator | Wednesday 18 March 2026 04:53:20 +0000 (0:00:00.452) 0:09:52.424 ******* 2026-03-18 04:53:20.975481 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:20.975488 | orchestrator | 2026-03-18 04:53:20.975499 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-18 04:53:38.355324 | orchestrator | Wednesday 18 March 2026 04:53:20 +0000 (0:00:00.154) 0:09:52.578 ******* 2026-03-18 04:53:38.355499 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.355517 | orchestrator | 2026-03-18 04:53:38.355530 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] 
*************** 2026-03-18 04:53:38.355542 | orchestrator | Wednesday 18 March 2026 04:53:21 +0000 (0:00:00.222) 0:09:52.801 ******* 2026-03-18 04:53:38.355552 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:38.355564 | orchestrator | 2026-03-18 04:53:38.355575 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-18 04:53:38.355586 | orchestrator | Wednesday 18 March 2026 04:53:22 +0000 (0:00:00.994) 0:09:53.796 ******* 2026-03-18 04:53:38.355597 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:38.355669 | orchestrator | 2026-03-18 04:53:38.355685 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-18 04:53:38.355696 | orchestrator | Wednesday 18 March 2026 04:53:23 +0000 (0:00:01.466) 0:09:55.263 ******* 2026-03-18 04:53:38.355707 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-03-18 04:53:38.355718 | orchestrator | 2026-03-18 04:53:38.355729 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-18 04:53:38.355756 | orchestrator | Wednesday 18 March 2026 04:53:23 +0000 (0:00:00.211) 0:09:55.474 ******* 2026-03-18 04:53:38.355767 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.355778 | orchestrator | 2026-03-18 04:53:38.355812 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-18 04:53:38.355824 | orchestrator | Wednesday 18 March 2026 04:53:23 +0000 (0:00:00.142) 0:09:55.616 ******* 2026-03-18 04:53:38.355834 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.355845 | orchestrator | 2026-03-18 04:53:38.355856 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-18 04:53:38.355868 | orchestrator | Wednesday 18 March 2026 04:53:24 +0000 (0:00:00.171) 0:09:55.788 ******* 2026-03-18 04:53:38.355881 | 
orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-18 04:53:38.355893 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-18 04:53:38.355907 | orchestrator | 2026-03-18 04:53:38.355919 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-18 04:53:38.355931 | orchestrator | Wednesday 18 March 2026 04:53:25 +0000 (0:00:00.881) 0:09:56.670 ******* 2026-03-18 04:53:38.355943 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:38.355956 | orchestrator | 2026-03-18 04:53:38.355968 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-18 04:53:38.355980 | orchestrator | Wednesday 18 March 2026 04:53:25 +0000 (0:00:00.538) 0:09:57.208 ******* 2026-03-18 04:53:38.355993 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.356004 | orchestrator | 2026-03-18 04:53:38.356016 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-18 04:53:38.356029 | orchestrator | Wednesday 18 March 2026 04:53:25 +0000 (0:00:00.172) 0:09:57.380 ******* 2026-03-18 04:53:38.356041 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.356053 | orchestrator | 2026-03-18 04:53:38.356065 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-18 04:53:38.356077 | orchestrator | Wednesday 18 March 2026 04:53:25 +0000 (0:00:00.137) 0:09:57.518 ******* 2026-03-18 04:53:38.356089 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.356102 | orchestrator | 2026-03-18 04:53:38.356114 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-18 04:53:38.356126 | orchestrator | Wednesday 18 March 2026 04:53:26 +0000 (0:00:00.414) 0:09:57.932 ******* 2026-03-18 04:53:38.356139 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-03-18 04:53:38.356150 | orchestrator | 2026-03-18 04:53:38.356163 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-18 04:53:38.356175 | orchestrator | Wednesday 18 March 2026 04:53:26 +0000 (0:00:00.228) 0:09:58.161 ******* 2026-03-18 04:53:38.356187 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:38.356200 | orchestrator | 2026-03-18 04:53:38.356212 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-18 04:53:38.356225 | orchestrator | Wednesday 18 March 2026 04:53:27 +0000 (0:00:00.714) 0:09:58.876 ******* 2026-03-18 04:53:38.356237 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 04:53:38.356248 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 04:53:38.356259 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-18 04:53:38.356270 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.356281 | orchestrator | 2026-03-18 04:53:38.356291 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-18 04:53:38.356302 | orchestrator | Wednesday 18 March 2026 04:53:27 +0000 (0:00:00.156) 0:09:59.032 ******* 2026-03-18 04:53:38.356313 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.356323 | orchestrator | 2026-03-18 04:53:38.356334 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-18 04:53:38.356345 | orchestrator | Wednesday 18 March 2026 04:53:27 +0000 (0:00:00.158) 0:09:59.191 ******* 2026-03-18 04:53:38.356355 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.356366 | orchestrator | 2026-03-18 04:53:38.356377 | orchestrator | TASK [ceph-container-common : Copy ceph dev image 
file] ************************ 2026-03-18 04:53:38.356395 | orchestrator | Wednesday 18 March 2026 04:53:27 +0000 (0:00:00.173) 0:09:59.364 ******* 2026-03-18 04:53:38.356406 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.356417 | orchestrator | 2026-03-18 04:53:38.356427 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-18 04:53:38.356438 | orchestrator | Wednesday 18 March 2026 04:53:27 +0000 (0:00:00.171) 0:09:59.536 ******* 2026-03-18 04:53:38.356449 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.356460 | orchestrator | 2026-03-18 04:53:38.356489 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-18 04:53:38.356500 | orchestrator | Wednesday 18 March 2026 04:53:28 +0000 (0:00:00.168) 0:09:59.705 ******* 2026-03-18 04:53:38.356511 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.356522 | orchestrator | 2026-03-18 04:53:38.356532 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-18 04:53:38.356543 | orchestrator | Wednesday 18 March 2026 04:53:28 +0000 (0:00:00.166) 0:09:59.871 ******* 2026-03-18 04:53:38.356554 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:38.356564 | orchestrator | 2026-03-18 04:53:38.356575 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-18 04:53:38.356586 | orchestrator | Wednesday 18 March 2026 04:53:29 +0000 (0:00:01.615) 0:10:01.486 ******* 2026-03-18 04:53:38.356596 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:38.356642 | orchestrator | 2026-03-18 04:53:38.356655 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-18 04:53:38.356666 | orchestrator | Wednesday 18 March 2026 04:53:30 +0000 (0:00:00.158) 0:10:01.645 ******* 2026-03-18 04:53:38.356676 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-03-18 04:53:38.356687 | orchestrator | 2026-03-18 04:53:38.356704 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-18 04:53:38.356715 | orchestrator | Wednesday 18 March 2026 04:53:30 +0000 (0:00:00.521) 0:10:02.167 ******* 2026-03-18 04:53:38.356725 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.356736 | orchestrator | 2026-03-18 04:53:38.356747 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-18 04:53:38.356757 | orchestrator | Wednesday 18 March 2026 04:53:30 +0000 (0:00:00.165) 0:10:02.332 ******* 2026-03-18 04:53:38.356768 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.356778 | orchestrator | 2026-03-18 04:53:38.356789 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-18 04:53:38.356800 | orchestrator | Wednesday 18 March 2026 04:53:30 +0000 (0:00:00.160) 0:10:02.493 ******* 2026-03-18 04:53:38.356810 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.356821 | orchestrator | 2026-03-18 04:53:38.356832 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-18 04:53:38.356842 | orchestrator | Wednesday 18 March 2026 04:53:31 +0000 (0:00:00.160) 0:10:02.653 ******* 2026-03-18 04:53:38.356853 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.356863 | orchestrator | 2026-03-18 04:53:38.356874 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-18 04:53:38.356884 | orchestrator | Wednesday 18 March 2026 04:53:31 +0000 (0:00:00.162) 0:10:02.816 ******* 2026-03-18 04:53:38.356895 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.356906 | orchestrator | 2026-03-18 04:53:38.356916 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
octopus] ******************* 2026-03-18 04:53:38.356927 | orchestrator | Wednesday 18 March 2026 04:53:31 +0000 (0:00:00.155) 0:10:02.971 ******* 2026-03-18 04:53:38.356937 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.356948 | orchestrator | 2026-03-18 04:53:38.356959 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-18 04:53:38.356969 | orchestrator | Wednesday 18 March 2026 04:53:31 +0000 (0:00:00.170) 0:10:03.142 ******* 2026-03-18 04:53:38.356980 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.356998 | orchestrator | 2026-03-18 04:53:38.357009 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-18 04:53:38.357019 | orchestrator | Wednesday 18 March 2026 04:53:31 +0000 (0:00:00.169) 0:10:03.312 ******* 2026-03-18 04:53:38.357030 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:38.357041 | orchestrator | 2026-03-18 04:53:38.357051 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-18 04:53:38.357062 | orchestrator | Wednesday 18 March 2026 04:53:31 +0000 (0:00:00.161) 0:10:03.473 ******* 2026-03-18 04:53:38.357073 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:38.357083 | orchestrator | 2026-03-18 04:53:38.357094 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-18 04:53:38.357105 | orchestrator | Wednesday 18 March 2026 04:53:32 +0000 (0:00:00.226) 0:10:03.699 ******* 2026-03-18 04:53:38.357115 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-03-18 04:53:38.357126 | orchestrator | 2026-03-18 04:53:38.357137 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-18 04:53:38.357147 | orchestrator | Wednesday 18 March 2026 04:53:32 +0000 (0:00:00.204) 0:10:03.904 ******* 2026-03-18 
04:53:38.357158 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-03-18 04:53:38.357169 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-18 04:53:38.357180 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-18 04:53:38.357190 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-18 04:53:38.357201 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-18 04:53:38.357212 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-18 04:53:38.357222 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-18 04:53:38.357233 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-18 04:53:38.357244 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 04:53:38.357255 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 04:53:38.357265 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 04:53:38.357276 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 04:53:38.357287 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 04:53:38.357297 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 04:53:38.357308 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-03-18 04:53:38.357319 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-03-18 04:53:38.357330 | orchestrator | 2026-03-18 04:53:38.357346 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-18 04:53:57.410318 | orchestrator | Wednesday 18 March 2026 04:53:38 +0000 (0:00:06.042) 0:10:09.946 ******* 2026-03-18 04:53:57.410434 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.410452 | orchestrator | 2026-03-18 04:53:57.410465 | orchestrator | TASK [ceph-config : 
Reset num_osds] ******************************************** 2026-03-18 04:53:57.410476 | orchestrator | Wednesday 18 March 2026 04:53:38 +0000 (0:00:00.163) 0:10:10.110 ******* 2026-03-18 04:53:57.410487 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.410498 | orchestrator | 2026-03-18 04:53:57.410509 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-18 04:53:57.410520 | orchestrator | Wednesday 18 March 2026 04:53:38 +0000 (0:00:00.147) 0:10:10.257 ******* 2026-03-18 04:53:57.410531 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.410542 | orchestrator | 2026-03-18 04:53:57.410553 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-18 04:53:57.410564 | orchestrator | Wednesday 18 March 2026 04:53:38 +0000 (0:00:00.155) 0:10:10.413 ******* 2026-03-18 04:53:57.410575 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.410585 | orchestrator | 2026-03-18 04:53:57.410596 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-18 04:53:57.410662 | orchestrator | Wednesday 18 March 2026 04:53:38 +0000 (0:00:00.149) 0:10:10.563 ******* 2026-03-18 04:53:57.410676 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.410686 | orchestrator | 2026-03-18 04:53:57.410698 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-18 04:53:57.410709 | orchestrator | Wednesday 18 March 2026 04:53:39 +0000 (0:00:00.157) 0:10:10.720 ******* 2026-03-18 04:53:57.410720 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.410731 | orchestrator | 2026-03-18 04:53:57.410742 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-18 04:53:57.410755 | orchestrator | Wednesday 18 March 2026 04:53:39 +0000 (0:00:00.135) 0:10:10.855 ******* 2026-03-18 
04:53:57.410765 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.410776 | orchestrator | 2026-03-18 04:53:57.410787 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-18 04:53:57.410798 | orchestrator | Wednesday 18 March 2026 04:53:39 +0000 (0:00:00.144) 0:10:11.000 ******* 2026-03-18 04:53:57.410809 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.410820 | orchestrator | 2026-03-18 04:53:57.410831 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-18 04:53:57.410842 | orchestrator | Wednesday 18 March 2026 04:53:39 +0000 (0:00:00.130) 0:10:11.130 ******* 2026-03-18 04:53:57.410853 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.410864 | orchestrator | 2026-03-18 04:53:57.410874 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-18 04:53:57.410885 | orchestrator | Wednesday 18 March 2026 04:53:39 +0000 (0:00:00.145) 0:10:11.276 ******* 2026-03-18 04:53:57.410896 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.410907 | orchestrator | 2026-03-18 04:53:57.410918 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-18 04:53:57.410928 | orchestrator | Wednesday 18 March 2026 04:53:39 +0000 (0:00:00.125) 0:10:11.402 ******* 2026-03-18 04:53:57.410939 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.410950 | orchestrator | 2026-03-18 04:53:57.410961 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-18 04:53:57.410971 | orchestrator | Wednesday 18 March 2026 04:53:39 +0000 (0:00:00.141) 0:10:11.544 ******* 2026-03-18 04:53:57.410982 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.410993 | orchestrator | 2026-03-18 04:53:57.411003 | 
orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-18 04:53:57.411014 | orchestrator | Wednesday 18 March 2026 04:53:40 +0000 (0:00:00.162) 0:10:11.706 ******* 2026-03-18 04:53:57.411025 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.411035 | orchestrator | 2026-03-18 04:53:57.411047 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-18 04:53:57.411100 | orchestrator | Wednesday 18 March 2026 04:53:41 +0000 (0:00:00.963) 0:10:12.670 ******* 2026-03-18 04:53:57.411113 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.411124 | orchestrator | 2026-03-18 04:53:57.411134 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-18 04:53:57.411145 | orchestrator | Wednesday 18 March 2026 04:53:41 +0000 (0:00:00.139) 0:10:12.810 ******* 2026-03-18 04:53:57.411156 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.411167 | orchestrator | 2026-03-18 04:53:57.411178 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-18 04:53:57.411189 | orchestrator | Wednesday 18 March 2026 04:53:41 +0000 (0:00:00.236) 0:10:13.046 ******* 2026-03-18 04:53:57.411199 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.411210 | orchestrator | 2026-03-18 04:53:57.411221 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-18 04:53:57.411232 | orchestrator | Wednesday 18 March 2026 04:53:41 +0000 (0:00:00.149) 0:10:13.196 ******* 2026-03-18 04:53:57.411242 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.411262 | orchestrator | 2026-03-18 04:53:57.411273 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 04:53:57.411286 | orchestrator | Wednesday 18 March 
2026 04:53:41 +0000 (0:00:00.149) 0:10:13.345 ******* 2026-03-18 04:53:57.411297 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.411307 | orchestrator | 2026-03-18 04:53:57.411318 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-18 04:53:57.411329 | orchestrator | Wednesday 18 March 2026 04:53:41 +0000 (0:00:00.153) 0:10:13.499 ******* 2026-03-18 04:53:57.411339 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.411350 | orchestrator | 2026-03-18 04:53:57.411360 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 04:53:57.411371 | orchestrator | Wednesday 18 March 2026 04:53:42 +0000 (0:00:00.174) 0:10:13.673 ******* 2026-03-18 04:53:57.411382 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.411393 | orchestrator | 2026-03-18 04:53:57.411422 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 04:53:57.411433 | orchestrator | Wednesday 18 March 2026 04:53:42 +0000 (0:00:00.139) 0:10:13.813 ******* 2026-03-18 04:53:57.411444 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.411455 | orchestrator | 2026-03-18 04:53:57.411465 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 04:53:57.411476 | orchestrator | Wednesday 18 March 2026 04:53:42 +0000 (0:00:00.163) 0:10:13.976 ******* 2026-03-18 04:53:57.411487 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-18 04:53:57.411498 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-18 04:53:57.411509 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-18 04:53:57.411520 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.411530 | orchestrator | 2026-03-18 04:53:57.411541 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - 
ipv4] ****** 2026-03-18 04:53:57.411552 | orchestrator | Wednesday 18 March 2026 04:53:42 +0000 (0:00:00.422) 0:10:14.399 ******* 2026-03-18 04:53:57.411562 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-18 04:53:57.411579 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-18 04:53:57.411590 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-18 04:53:57.411600 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.411611 | orchestrator | 2026-03-18 04:53:57.411663 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 04:53:57.411677 | orchestrator | Wednesday 18 March 2026 04:53:43 +0000 (0:00:00.419) 0:10:14.819 ******* 2026-03-18 04:53:57.411688 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-18 04:53:57.411699 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-18 04:53:57.411710 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-18 04:53:57.411720 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.411731 | orchestrator | 2026-03-18 04:53:57.411742 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 04:53:57.411753 | orchestrator | Wednesday 18 March 2026 04:53:43 +0000 (0:00:00.401) 0:10:15.220 ******* 2026-03-18 04:53:57.411764 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.411775 | orchestrator | 2026-03-18 04:53:57.411786 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 04:53:57.411796 | orchestrator | Wednesday 18 March 2026 04:53:43 +0000 (0:00:00.127) 0:10:15.348 ******* 2026-03-18 04:53:57.411808 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-18 04:53:57.411819 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:53:57.411829 | orchestrator | 2026-03-18 
04:53:57.411840 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-18 04:53:57.411851 | orchestrator | Wednesday 18 March 2026 04:53:44 +0000 (0:00:00.635) 0:10:15.983 ******* 2026-03-18 04:53:57.411870 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:57.411881 | orchestrator | 2026-03-18 04:53:57.411892 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-18 04:53:57.411902 | orchestrator | Wednesday 18 March 2026 04:53:45 +0000 (0:00:00.870) 0:10:16.854 ******* 2026-03-18 04:53:57.411913 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 04:53:57.411924 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:53:57.411935 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:53:57.411946 | orchestrator | 2026-03-18 04:53:57.411956 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-18 04:53:57.411967 | orchestrator | Wednesday 18 March 2026 04:53:45 +0000 (0:00:00.675) 0:10:17.530 ******* 2026-03-18 04:53:57.411978 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0 2026-03-18 04:53:57.411989 | orchestrator | 2026-03-18 04:53:57.411999 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-18 04:53:57.412010 | orchestrator | Wednesday 18 March 2026 04:53:46 +0000 (0:00:00.651) 0:10:18.182 ******* 2026-03-18 04:53:57.412020 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:57.412031 | orchestrator | 2026-03-18 04:53:57.412042 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-18 04:53:57.412052 | orchestrator | Wednesday 18 March 2026 04:53:47 +0000 (0:00:00.545) 0:10:18.727 ******* 2026-03-18 04:53:57.412063 | orchestrator | 
skipping: [testbed-node-0] 2026-03-18 04:53:57.412074 | orchestrator | 2026-03-18 04:53:57.412084 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-18 04:53:57.412095 | orchestrator | Wednesday 18 March 2026 04:53:47 +0000 (0:00:00.149) 0:10:18.876 ******* 2026-03-18 04:53:57.412106 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-18 04:53:57.412118 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-18 04:53:57.412128 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-18 04:53:57.412139 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-18 04:53:57.412150 | orchestrator | 2026-03-18 04:53:57.412161 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-18 04:53:57.412171 | orchestrator | Wednesday 18 March 2026 04:53:53 +0000 (0:00:06.703) 0:10:25.579 ******* 2026-03-18 04:53:57.412186 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:53:57.412197 | orchestrator | 2026-03-18 04:53:57.412208 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-18 04:53:57.412218 | orchestrator | Wednesday 18 March 2026 04:53:54 +0000 (0:00:00.188) 0:10:25.768 ******* 2026-03-18 04:53:57.412229 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-18 04:53:57.412240 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-18 04:53:57.412250 | orchestrator | 2026-03-18 04:53:57.412261 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-18 04:53:57.412272 | orchestrator | Wednesday 18 March 2026 04:53:56 +0000 (0:00:02.230) 0:10:27.998 ******* 2026-03-18 04:53:57.412290 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-18 04:54:26.947511 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-18 04:54:26.947624 | orchestrator | 2026-03-18 
04:54:26.947640 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-18 04:54:26.947683 | orchestrator | Wednesday 18 March 2026 04:53:57 +0000 (0:00:01.020) 0:10:29.019 ******* 2026-03-18 04:54:26.947694 | orchestrator | ok: [testbed-node-0] 2026-03-18 04:54:26.947705 | orchestrator | 2026-03-18 04:54:26.947715 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-18 04:54:26.947725 | orchestrator | Wednesday 18 March 2026 04:53:57 +0000 (0:00:00.526) 0:10:29.545 ******* 2026-03-18 04:54:26.947735 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:54:26.947745 | orchestrator | 2026-03-18 04:54:26.947754 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-18 04:54:26.947786 | orchestrator | Wednesday 18 March 2026 04:53:58 +0000 (0:00:00.431) 0:10:29.976 ******* 2026-03-18 04:54:26.947796 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:54:26.947806 | orchestrator | 2026-03-18 04:54:26.947815 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-18 04:54:26.947839 | orchestrator | Wednesday 18 March 2026 04:53:58 +0000 (0:00:00.156) 0:10:30.133 ******* 2026-03-18 04:54:26.947849 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-03-18 04:54:26.947859 | orchestrator | 2026-03-18 04:54:26.947869 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-18 04:54:26.947878 | orchestrator | Wednesday 18 March 2026 04:53:59 +0000 (0:00:00.651) 0:10:30.785 ******* 2026-03-18 04:54:26.947887 | orchestrator | skipping: [testbed-node-0] 2026-03-18 04:54:26.947897 | orchestrator | 2026-03-18 04:54:26.947906 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-18 04:54:26.947916 | orchestrator | Wednesday 18 
March 2026 04:53:59 +0000 (0:00:00.164) 0:10:30.949 *******
2026-03-18 04:54:26.947925 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:54:26.947934 | orchestrator |
2026-03-18 04:54:26.947944 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-18 04:54:26.947953 | orchestrator | Wednesday 18 March 2026 04:53:59 +0000 (0:00:00.152) 0:10:31.101 *******
2026-03-18 04:54:26.947962 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0
2026-03-18 04:54:26.947972 | orchestrator |
2026-03-18 04:54:26.947981 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-18 04:54:26.947991 | orchestrator | Wednesday 18 March 2026 04:54:00 +0000 (0:00:00.634) 0:10:31.736 *******
2026-03-18 04:54:26.948000 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:54:26.948010 | orchestrator |
2026-03-18 04:54:26.948019 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-18 04:54:26.948029 | orchestrator | Wednesday 18 March 2026 04:54:01 +0000 (0:00:01.055) 0:10:32.791 *******
2026-03-18 04:54:26.948038 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:54:26.948048 | orchestrator |
2026-03-18 04:54:26.948059 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-18 04:54:26.948071 | orchestrator | Wednesday 18 March 2026 04:54:02 +0000 (0:00:00.983) 0:10:33.775 *******
2026-03-18 04:54:26.948081 | orchestrator | ok: [testbed-node-0]
2026-03-18 04:54:26.948092 | orchestrator |
2026-03-18 04:54:26.948103 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-18 04:54:26.948114 | orchestrator | Wednesday 18 March 2026 04:54:03 +0000 (0:00:01.406) 0:10:35.182 *******
2026-03-18 04:54:26.948125 | orchestrator | changed: [testbed-node-0]
2026-03-18 04:54:26.948136 | orchestrator |
2026-03-18 04:54:26.948146 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-18 04:54:26.948157 | orchestrator | Wednesday 18 March 2026 04:54:06 +0000 (0:00:02.868) 0:10:38.050 *******
2026-03-18 04:54:26.948168 | orchestrator | skipping: [testbed-node-0]
2026-03-18 04:54:26.948178 | orchestrator |
2026-03-18 04:54:26.948189 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-03-18 04:54:26.948199 | orchestrator |
2026-03-18 04:54:26.948210 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-03-18 04:54:26.948221 | orchestrator | Wednesday 18 March 2026 04:54:07 +0000 (0:00:00.902) 0:10:38.953 *******
2026-03-18 04:54:26.948232 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:54:26.948242 | orchestrator |
2026-03-18 04:54:26.948253 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-03-18 04:54:26.948264 | orchestrator | Wednesday 18 March 2026 04:54:19 +0000 (0:00:11.762) 0:10:50.715 *******
2026-03-18 04:54:26.948275 | orchestrator | changed: [testbed-node-1]
2026-03-18 04:54:26.948285 | orchestrator |
2026-03-18 04:54:26.948295 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-18 04:54:26.948306 | orchestrator | Wednesday 18 March 2026 04:54:20 +0000 (0:00:01.413) 0:10:52.129 *******
2026-03-18 04:54:26.948325 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1
2026-03-18 04:54:26.948336 | orchestrator |
2026-03-18 04:54:26.948347 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-18 04:54:26.948358 | orchestrator | Wednesday 18 March 2026 04:54:20 +0000 (0:00:00.238) 0:10:52.367 *******
2026-03-18 04:54:26.948367 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:54:26.948377 | orchestrator |
2026-03-18 04:54:26.948386 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-18 04:54:26.948395 | orchestrator | Wednesday 18 March 2026 04:54:21 +0000 (0:00:00.402) 0:10:52.770 *******
2026-03-18 04:54:26.948405 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:54:26.948414 | orchestrator |
2026-03-18 04:54:26.948423 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-18 04:54:26.948433 | orchestrator | Wednesday 18 March 2026 04:54:21 +0000 (0:00:00.157) 0:10:52.927 *******
2026-03-18 04:54:26.948442 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:54:26.948451 | orchestrator |
2026-03-18 04:54:26.948461 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-18 04:54:26.948470 | orchestrator | Wednesday 18 March 2026 04:54:21 +0000 (0:00:00.458) 0:10:53.386 *******
2026-03-18 04:54:26.948479 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:54:26.948489 | orchestrator |
2026-03-18 04:54:26.948514 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-18 04:54:26.948524 | orchestrator | Wednesday 18 March 2026 04:54:21 +0000 (0:00:00.149) 0:10:53.535 *******
2026-03-18 04:54:26.948533 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:54:26.948543 | orchestrator |
2026-03-18 04:54:26.948552 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-18 04:54:26.948561 | orchestrator | Wednesday 18 March 2026 04:54:22 +0000 (0:00:00.137) 0:10:53.673 *******
2026-03-18 04:54:26.948571 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:54:26.948580 | orchestrator |
2026-03-18 04:54:26.948590 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-18 04:54:26.948599 | orchestrator | Wednesday 18 March 2026 04:54:22 +0000 (0:00:00.175) 0:10:53.848 *******
2026-03-18 04:54:26.948608 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:26.948618 | orchestrator |
2026-03-18 04:54:26.948627 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-18 04:54:26.948636 | orchestrator | Wednesday 18 March 2026 04:54:22 +0000 (0:00:00.141) 0:10:53.990 *******
2026-03-18 04:54:26.948667 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:54:26.948679 | orchestrator |
2026-03-18 04:54:26.948693 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-18 04:54:26.948703 | orchestrator | Wednesday 18 March 2026 04:54:22 +0000 (0:00:00.394) 0:10:54.384 *******
2026-03-18 04:54:26.948712 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 04:54:26.948722 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-18 04:54:26.948732 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 04:54:26.948741 | orchestrator |
2026-03-18 04:54:26.948750 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-18 04:54:26.948760 | orchestrator | Wednesday 18 March 2026 04:54:23 +0000 (0:00:00.712) 0:10:55.096 *******
2026-03-18 04:54:26.948769 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:54:26.948779 | orchestrator |
2026-03-18 04:54:26.948788 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-18 04:54:26.948798 | orchestrator | Wednesday 18 March 2026 04:54:23 +0000 (0:00:00.283) 0:10:55.380 *******
2026-03-18 04:54:26.948807 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 04:54:26.948816 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-18 04:54:26.948826 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 04:54:26.948842 | orchestrator |
2026-03-18 04:54:26.948852 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-18 04:54:26.948861 | orchestrator | Wednesday 18 March 2026 04:54:25 +0000 (0:00:01.922) 0:10:57.302 *******
2026-03-18 04:54:26.948870 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-18 04:54:26.948880 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-18 04:54:26.948890 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-18 04:54:26.948899 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:26.948909 | orchestrator |
2026-03-18 04:54:26.948918 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-18 04:54:26.948927 | orchestrator | Wednesday 18 March 2026 04:54:26 +0000 (0:00:00.440) 0:10:57.743 *******
2026-03-18 04:54:26.948939 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-18 04:54:26.948951 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-18 04:54:26.948961 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-18 04:54:26.948970 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:26.948980 | orchestrator |
2026-03-18 04:54:26.948989 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-18 04:54:26.948999 | orchestrator | Wednesday 18 March 2026 04:54:26 +0000 (0:00:00.636) 0:10:58.380 *******
2026-03-18 04:54:26.949010 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-18 04:54:26.949023 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-18 04:54:26.949041 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-18 04:54:30.836280 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:30.836390 | orchestrator |
2026-03-18 04:54:30.836408 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-18 04:54:30.836421 | orchestrator | Wednesday 18 March 2026 04:54:26 +0000 (0:00:00.169) 0:10:58.549 *******
2026-03-18 04:54:30.836453 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'f231ed715636', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 04:54:24.313636', 'end': '2026-03-18 04:54:24.366763', 'delta': '0:00:00.053127', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f231ed715636'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-18 04:54:30.836492 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'c6b616adb9bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 04:54:24.877696', 'end': '2026-03-18 04:54:24.930967', 'delta': '0:00:00.053271', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c6b616adb9bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-18 04:54:30.836504 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '38d5679b5612', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 04:54:25.487274', 'end': '2026-03-18 04:54:25.531810', 'delta': '0:00:00.044536', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['38d5679b5612'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-18 04:54:30.836515 | orchestrator |
2026-03-18 04:54:30.836527 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-18 04:54:30.836537 | orchestrator | Wednesday 18 March 2026 04:54:27 +0000 (0:00:00.206) 0:10:58.756 *******
2026-03-18 04:54:30.836549 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:54:30.836560 | orchestrator |
2026-03-18 04:54:30.836571 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-18 04:54:30.836582 | orchestrator | Wednesday 18 March 2026 04:54:27 +0000 (0:00:00.250) 0:10:59.022 *******
2026-03-18 04:54:30.836592 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:30.836603 | orchestrator |
2026-03-18 04:54:30.836614 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-18 04:54:30.836625 | orchestrator | Wednesday 18 March 2026 04:54:27 +0000 (0:00:00.250) 0:10:59.273 *******
2026-03-18 04:54:30.836636 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:54:30.836647 | orchestrator |
2026-03-18 04:54:30.836698 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-18 04:54:30.836712 | orchestrator | Wednesday 18 March 2026 04:54:27 +0000 (0:00:00.166) 0:10:59.440 *******
2026-03-18 04:54:30.836722 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-03-18 04:54:30.836733 | orchestrator |
2026-03-18 04:54:30.836743 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-18 04:54:30.836754 | orchestrator | Wednesday 18 March 2026 04:54:28 +0000 (0:00:00.842) 0:11:00.282 *******
2026-03-18 04:54:30.836765 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:54:30.836775 | orchestrator |
2026-03-18 04:54:30.836786 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-18 04:54:30.836797 | orchestrator | Wednesday 18 March 2026 04:54:28 +0000 (0:00:00.171) 0:11:00.454 *******
2026-03-18 04:54:30.836809 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:30.836820 | orchestrator |
2026-03-18 04:54:30.836833 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-18 04:54:30.836845 | orchestrator | Wednesday 18 March 2026 04:54:29 +0000 (0:00:00.453) 0:11:00.907 *******
2026-03-18 04:54:30.836866 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:30.836878 | orchestrator |
2026-03-18 04:54:30.836891 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-18 04:54:30.836903 | orchestrator | Wednesday 18 March 2026 04:54:29 +0000 (0:00:00.240) 0:11:01.147 *******
2026-03-18 04:54:30.836915 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:30.836927 | orchestrator |
2026-03-18 04:54:30.836955 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-18 04:54:30.836968 | orchestrator | Wednesday 18 March 2026 04:54:29 +0000 (0:00:00.167) 0:11:01.315 *******
2026-03-18 04:54:30.836981 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:30.836993 | orchestrator |
2026-03-18 04:54:30.837006 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-18 04:54:30.837019 | orchestrator | Wednesday 18 March 2026 04:54:29 +0000 (0:00:00.154) 0:11:01.469 *******
2026-03-18 04:54:30.837031 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:30.837044 | orchestrator |
2026-03-18 04:54:30.837061 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-18 04:54:30.837074 | orchestrator | Wednesday 18 March 2026 04:54:30 +0000 (0:00:00.159) 0:11:01.628 *******
2026-03-18 04:54:30.837087 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:30.837099 | orchestrator |
2026-03-18 04:54:30.837112 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-18 04:54:30.837124 | orchestrator | Wednesday 18 March 2026 04:54:30 +0000 (0:00:00.147) 0:11:01.775 *******
2026-03-18 04:54:30.837137 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:30.837149 | orchestrator |
2026-03-18 04:54:30.837161 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-18 04:54:30.837173 | orchestrator | Wednesday 18 March 2026 04:54:30 +0000 (0:00:00.146) 0:11:01.921 *******
2026-03-18 04:54:30.837183 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:30.837194 | orchestrator |
2026-03-18 04:54:30.837205 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-18 04:54:30.837216 | orchestrator | Wednesday 18 March 2026 04:54:30 +0000 (0:00:00.130) 0:11:02.052 *******
2026-03-18 04:54:30.837227 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:30.837238 | orchestrator |
2026-03-18 04:54:30.837248 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-18 04:54:30.837259 | orchestrator | Wednesday 18 March 2026 04:54:30 +0000 (0:00:00.139) 0:11:02.192 *******
2026-03-18 04:54:30.837271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 04:54:30.837285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 04:54:30.837296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 04:54:30.837308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-18 04:54:30.837328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 04:54:30.837340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 04:54:30.837359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 04:54:31.105006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a74f897f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part16', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part14', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part15', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part1', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-18 04:54:31.105103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 04:54:31.105141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 04:54:31.105176 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:31.105190 | orchestrator |
2026-03-18 04:54:31.105202 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-18 04:54:31.105214 | orchestrator | Wednesday 18 March 2026 04:54:30 +0000 (0:00:00.251) 0:11:02.443 *******
2026-03-18 04:54:31.105227 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:54:31.105276 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:54:31.105299 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:54:31.105344 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:54:31.105357 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:54:31.105377 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:54:31.105388 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:54:31.105419 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a74f897f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part16', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part14', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part15', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part1', 'scsi-SQEMU_QEMU_HARDDISK_a74f897f-9956-4887-8b8d-6711f76e2ca2-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:54:41.896438 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:54:41.896595 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 04:54:41.896613 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:41.896627 | orchestrator |
2026-03-18 04:54:41.896639 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-18 04:54:41.896651 | orchestrator | Wednesday 18 March 2026 04:54:31 +0000 (0:00:00.271) 0:11:02.715 *******
2026-03-18 04:54:41.896710 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:54:41.896724 | orchestrator |
2026-03-18 04:54:41.896735 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-18 04:54:41.896746 | orchestrator | Wednesday 18 March 2026 04:54:31 +0000 (0:00:00.502) 0:11:03.217 *******
2026-03-18 04:54:41.896756 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:54:41.896767 | orchestrator |
2026-03-18 04:54:41.896778 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-18 04:54:41.896789 | orchestrator | Wednesday 18 March 2026 04:54:31 +0000 (0:00:00.138) 0:11:03.356 *******
2026-03-18 04:54:41.896800 | orchestrator | ok: [testbed-node-1]
2026-03-18 04:54:41.896811 | orchestrator |
2026-03-18 04:54:41.896822 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-18 04:54:41.896833 | orchestrator | Wednesday 18 March 2026 04:54:32 +0000 (0:00:00.832) 0:11:04.189 *******
2026-03-18 04:54:41.896844 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:41.896854 | orchestrator |
2026-03-18 04:54:41.896865 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-18 04:54:41.896877 | orchestrator | Wednesday 18 March 2026 04:54:32 +0000 (0:00:00.153) 0:11:04.342 *******
2026-03-18 04:54:41.896895 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:41.896913 | orchestrator |
2026-03-18 04:54:41.896948 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-18 04:54:41.896969 | orchestrator | Wednesday 18 March 2026 04:54:32 +0000 (0:00:00.254) 0:11:04.597 *******
2026-03-18 04:54:41.896989 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:41.897007 | orchestrator |
2026-03-18 04:54:41.897023 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-18 04:54:41.897036 | orchestrator | Wednesday 18 March 2026 04:54:33 +0000 (0:00:00.158) 0:11:04.755 *******
2026-03-18 04:54:41.897049 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-18 04:54:41.897061 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-18 04:54:41.897074 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-18 04:54:41.897086 | orchestrator |
2026-03-18 04:54:41.897098 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-18 04:54:41.897110 | orchestrator | Wednesday 18 March 2026 04:54:33 +0000 (0:00:00.697) 0:11:05.453 *******
2026-03-18 04:54:41.897122 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-18 04:54:41.897135 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-18 04:54:41.897157 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-18 04:54:41.897170 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:41.897182 | orchestrator |
2026-03-18 04:54:41.897194 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-18 04:54:41.897206 | orchestrator | Wednesday 18 March 2026 04:54:34 +0000 (0:00:00.182) 0:11:05.636 *******
2026-03-18 04:54:41.897218 | orchestrator | skipping: [testbed-node-1]
2026-03-18 04:54:41.897230 | orchestrator |
2026-03-18 04:54:41.897242 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-18 04:54:41.897254 | orchestrator | Wednesday 18 March 2026 04:54:34 +0000 (0:00:00.158) 0:11:05.794 *******
2026-03-18 04:54:41.897266 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 04:54:41.897279 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-18 04:54:41.897291 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 04:54:41.897303 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-18 04:54:41.897316 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-18 04:54:41.897328 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-18 04:54:41.897357 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 04:54:41.897369 | orchestrator |
2026-03-18 04:54:41.897380 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-18 04:54:41.897390 | orchestrator | Wednesday 18 March 2026 04:54:35 +0000 (0:00:01.238) 0:11:07.033 *******
2026-03-18 04:54:41.897401 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 04:54:41.897412 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-18 04:54:41.897423 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 04:54:41.897433 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-18 04:54:41.897444 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-18 04:54:41.897454 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-18 04:54:41.897465 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 04:54:41.897476 | orchestrator |
2026-03-18 04:54:41.897486 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-18 04:54:41.897498 | orchestrator | Wednesday 18 March 2026 04:54:37 +0000 (0:00:01.676) 0:11:08.710
******* 2026-03-18 04:54:41.897508 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-03-18 04:54:41.897520 | orchestrator | 2026-03-18 04:54:41.897531 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-18 04:54:41.897542 | orchestrator | Wednesday 18 March 2026 04:54:37 +0000 (0:00:00.215) 0:11:08.925 ******* 2026-03-18 04:54:41.897552 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-03-18 04:54:41.897563 | orchestrator | 2026-03-18 04:54:41.897574 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-18 04:54:41.897585 | orchestrator | Wednesday 18 March 2026 04:54:37 +0000 (0:00:00.490) 0:11:09.415 ******* 2026-03-18 04:54:41.897595 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:54:41.897606 | orchestrator | 2026-03-18 04:54:41.897617 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-18 04:54:41.897627 | orchestrator | Wednesday 18 March 2026 04:54:38 +0000 (0:00:00.585) 0:11:10.001 ******* 2026-03-18 04:54:41.897638 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:41.897649 | orchestrator | 2026-03-18 04:54:41.897684 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-18 04:54:41.897707 | orchestrator | Wednesday 18 March 2026 04:54:38 +0000 (0:00:00.138) 0:11:10.139 ******* 2026-03-18 04:54:41.897718 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:41.897729 | orchestrator | 2026-03-18 04:54:41.897740 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-18 04:54:41.897750 | orchestrator | Wednesday 18 March 2026 04:54:38 +0000 (0:00:00.140) 0:11:10.279 ******* 2026-03-18 04:54:41.897761 | orchestrator | skipping: [testbed-node-1] 2026-03-18 
04:54:41.897771 | orchestrator | 2026-03-18 04:54:41.897782 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-18 04:54:41.897798 | orchestrator | Wednesday 18 March 2026 04:54:38 +0000 (0:00:00.144) 0:11:10.424 ******* 2026-03-18 04:54:41.897809 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:54:41.897820 | orchestrator | 2026-03-18 04:54:41.897831 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-18 04:54:41.897842 | orchestrator | Wednesday 18 March 2026 04:54:39 +0000 (0:00:00.618) 0:11:11.042 ******* 2026-03-18 04:54:41.897852 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:41.897863 | orchestrator | 2026-03-18 04:54:41.897873 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-18 04:54:41.897893 | orchestrator | Wednesday 18 March 2026 04:54:39 +0000 (0:00:00.143) 0:11:11.186 ******* 2026-03-18 04:54:41.897911 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:41.897929 | orchestrator | 2026-03-18 04:54:41.897947 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-18 04:54:41.897966 | orchestrator | Wednesday 18 March 2026 04:54:39 +0000 (0:00:00.146) 0:11:11.333 ******* 2026-03-18 04:54:41.897984 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:54:41.898003 | orchestrator | 2026-03-18 04:54:41.898095 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-18 04:54:41.898107 | orchestrator | Wednesday 18 March 2026 04:54:40 +0000 (0:00:00.564) 0:11:11.897 ******* 2026-03-18 04:54:41.898118 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:54:41.898128 | orchestrator | 2026-03-18 04:54:41.898139 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-18 04:54:41.898150 | orchestrator | Wednesday 18 March 2026 
04:54:40 +0000 (0:00:00.540) 0:11:12.438 ******* 2026-03-18 04:54:41.898160 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:41.898171 | orchestrator | 2026-03-18 04:54:41.898181 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-18 04:54:41.898192 | orchestrator | Wednesday 18 March 2026 04:54:40 +0000 (0:00:00.155) 0:11:12.594 ******* 2026-03-18 04:54:41.898202 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:54:41.898213 | orchestrator | 2026-03-18 04:54:41.898223 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-18 04:54:41.898234 | orchestrator | Wednesday 18 March 2026 04:54:41 +0000 (0:00:00.167) 0:11:12.761 ******* 2026-03-18 04:54:41.898244 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:41.898255 | orchestrator | 2026-03-18 04:54:41.898265 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-18 04:54:41.898276 | orchestrator | Wednesday 18 March 2026 04:54:41 +0000 (0:00:00.183) 0:11:12.945 ******* 2026-03-18 04:54:41.898286 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:41.898297 | orchestrator | 2026-03-18 04:54:41.898307 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-18 04:54:41.898318 | orchestrator | Wednesday 18 March 2026 04:54:41 +0000 (0:00:00.138) 0:11:13.083 ******* 2026-03-18 04:54:41.898338 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.066832 | orchestrator | 2026-03-18 04:54:54.066951 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-18 04:54:54.066968 | orchestrator | Wednesday 18 March 2026 04:54:41 +0000 (0:00:00.418) 0:11:13.501 ******* 2026-03-18 04:54:54.066980 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.066992 | orchestrator | 2026-03-18 04:54:54.067004 | orchestrator | TASK 
[ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-18 04:54:54.067040 | orchestrator | Wednesday 18 March 2026 04:54:42 +0000 (0:00:00.134) 0:11:13.635 ******* 2026-03-18 04:54:54.067052 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.067063 | orchestrator | 2026-03-18 04:54:54.067074 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-18 04:54:54.067084 | orchestrator | Wednesday 18 March 2026 04:54:42 +0000 (0:00:00.150) 0:11:13.786 ******* 2026-03-18 04:54:54.067095 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:54:54.067107 | orchestrator | 2026-03-18 04:54:54.067118 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-18 04:54:54.067128 | orchestrator | Wednesday 18 March 2026 04:54:42 +0000 (0:00:00.174) 0:11:13.961 ******* 2026-03-18 04:54:54.067139 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:54:54.067150 | orchestrator | 2026-03-18 04:54:54.067161 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-18 04:54:54.067171 | orchestrator | Wednesday 18 March 2026 04:54:42 +0000 (0:00:00.187) 0:11:14.148 ******* 2026-03-18 04:54:54.067182 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:54:54.067193 | orchestrator | 2026-03-18 04:54:54.067203 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-18 04:54:54.067215 | orchestrator | Wednesday 18 March 2026 04:54:42 +0000 (0:00:00.251) 0:11:14.400 ******* 2026-03-18 04:54:54.067226 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.067237 | orchestrator | 2026-03-18 04:54:54.067247 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-18 04:54:54.067258 | orchestrator | Wednesday 18 March 2026 04:54:42 +0000 (0:00:00.153) 0:11:14.553 ******* 2026-03-18 04:54:54.067269 | 
orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.067280 | orchestrator | 2026-03-18 04:54:54.067290 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-18 04:54:54.067304 | orchestrator | Wednesday 18 March 2026 04:54:43 +0000 (0:00:00.140) 0:11:14.693 ******* 2026-03-18 04:54:54.067317 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.067329 | orchestrator | 2026-03-18 04:54:54.067342 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-18 04:54:54.067355 | orchestrator | Wednesday 18 March 2026 04:54:43 +0000 (0:00:00.133) 0:11:14.826 ******* 2026-03-18 04:54:54.067367 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.067380 | orchestrator | 2026-03-18 04:54:54.067393 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-18 04:54:54.067405 | orchestrator | Wednesday 18 March 2026 04:54:43 +0000 (0:00:00.143) 0:11:14.970 ******* 2026-03-18 04:54:54.067418 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.067430 | orchestrator | 2026-03-18 04:54:54.067443 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-18 04:54:54.067455 | orchestrator | Wednesday 18 March 2026 04:54:43 +0000 (0:00:00.121) 0:11:15.091 ******* 2026-03-18 04:54:54.067468 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.067480 | orchestrator | 2026-03-18 04:54:54.067508 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-18 04:54:54.067521 | orchestrator | Wednesday 18 March 2026 04:54:43 +0000 (0:00:00.136) 0:11:15.227 ******* 2026-03-18 04:54:54.067534 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.067547 | orchestrator | 2026-03-18 04:54:54.067560 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with 
ceph_stable_release] *** 2026-03-18 04:54:54.067574 | orchestrator | Wednesday 18 March 2026 04:54:44 +0000 (0:00:00.458) 0:11:15.686 ******* 2026-03-18 04:54:54.067586 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.067599 | orchestrator | 2026-03-18 04:54:54.067611 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-18 04:54:54.067624 | orchestrator | Wednesday 18 March 2026 04:54:44 +0000 (0:00:00.143) 0:11:15.829 ******* 2026-03-18 04:54:54.067636 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.067649 | orchestrator | 2026-03-18 04:54:54.067662 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-18 04:54:54.067718 | orchestrator | Wednesday 18 March 2026 04:54:44 +0000 (0:00:00.126) 0:11:15.956 ******* 2026-03-18 04:54:54.067731 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.067741 | orchestrator | 2026-03-18 04:54:54.067752 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-18 04:54:54.067763 | orchestrator | Wednesday 18 March 2026 04:54:44 +0000 (0:00:00.127) 0:11:16.084 ******* 2026-03-18 04:54:54.067774 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.067784 | orchestrator | 2026-03-18 04:54:54.067795 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-18 04:54:54.067805 | orchestrator | Wednesday 18 March 2026 04:54:44 +0000 (0:00:00.135) 0:11:16.219 ******* 2026-03-18 04:54:54.067816 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.067827 | orchestrator | 2026-03-18 04:54:54.067837 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-18 04:54:54.067848 | orchestrator | Wednesday 18 March 2026 04:54:44 +0000 (0:00:00.270) 0:11:16.490 ******* 2026-03-18 04:54:54.067859 | orchestrator | ok: [testbed-node-1] 
2026-03-18 04:54:54.067869 | orchestrator | 2026-03-18 04:54:54.067880 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-18 04:54:54.067890 | orchestrator | Wednesday 18 March 2026 04:54:45 +0000 (0:00:00.998) 0:11:17.489 ******* 2026-03-18 04:54:54.067901 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:54:54.067912 | orchestrator | 2026-03-18 04:54:54.067922 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-18 04:54:54.067933 | orchestrator | Wednesday 18 March 2026 04:54:47 +0000 (0:00:01.375) 0:11:18.865 ******* 2026-03-18 04:54:54.067944 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-03-18 04:54:54.067956 | orchestrator | 2026-03-18 04:54:54.067982 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-18 04:54:54.067994 | orchestrator | Wednesday 18 March 2026 04:54:47 +0000 (0:00:00.206) 0:11:19.071 ******* 2026-03-18 04:54:54.068005 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.068016 | orchestrator | 2026-03-18 04:54:54.068027 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-18 04:54:54.068038 | orchestrator | Wednesday 18 March 2026 04:54:47 +0000 (0:00:00.145) 0:11:19.217 ******* 2026-03-18 04:54:54.068048 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.068059 | orchestrator | 2026-03-18 04:54:54.068070 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-18 04:54:54.068080 | orchestrator | Wednesday 18 March 2026 04:54:47 +0000 (0:00:00.158) 0:11:19.376 ******* 2026-03-18 04:54:54.068091 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-18 04:54:54.068101 | orchestrator | ok: [testbed-node-1] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-18 04:54:54.068112 | orchestrator | 2026-03-18 04:54:54.068123 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-18 04:54:54.068134 | orchestrator | Wednesday 18 March 2026 04:54:48 +0000 (0:00:01.168) 0:11:20.544 ******* 2026-03-18 04:54:54.068145 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:54:54.068155 | orchestrator | 2026-03-18 04:54:54.068166 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-18 04:54:54.068177 | orchestrator | Wednesday 18 March 2026 04:54:49 +0000 (0:00:00.492) 0:11:21.036 ******* 2026-03-18 04:54:54.068187 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.068198 | orchestrator | 2026-03-18 04:54:54.068209 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-18 04:54:54.068220 | orchestrator | Wednesday 18 March 2026 04:54:49 +0000 (0:00:00.154) 0:11:21.190 ******* 2026-03-18 04:54:54.068230 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.068241 | orchestrator | 2026-03-18 04:54:54.068252 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-18 04:54:54.068269 | orchestrator | Wednesday 18 March 2026 04:54:49 +0000 (0:00:00.163) 0:11:21.354 ******* 2026-03-18 04:54:54.068280 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.068291 | orchestrator | 2026-03-18 04:54:54.068302 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-18 04:54:54.068313 | orchestrator | Wednesday 18 March 2026 04:54:49 +0000 (0:00:00.149) 0:11:21.504 ******* 2026-03-18 04:54:54.068323 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-03-18 04:54:54.068334 | orchestrator | 2026-03-18 04:54:54.068345 | orchestrator | TASK 
[ceph-container-common : Pulling Ceph container image] ******************** 2026-03-18 04:54:54.068355 | orchestrator | Wednesday 18 March 2026 04:54:50 +0000 (0:00:00.229) 0:11:21.733 ******* 2026-03-18 04:54:54.068366 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:54:54.068377 | orchestrator | 2026-03-18 04:54:54.068388 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-18 04:54:54.068398 | orchestrator | Wednesday 18 March 2026 04:54:50 +0000 (0:00:00.772) 0:11:22.506 ******* 2026-03-18 04:54:54.068409 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 04:54:54.068425 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 04:54:54.068436 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-18 04:54:54.068447 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.068457 | orchestrator | 2026-03-18 04:54:54.068468 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-18 04:54:54.068479 | orchestrator | Wednesday 18 March 2026 04:54:51 +0000 (0:00:00.149) 0:11:22.655 ******* 2026-03-18 04:54:54.068490 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.068500 | orchestrator | 2026-03-18 04:54:54.068511 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-18 04:54:54.068522 | orchestrator | Wednesday 18 March 2026 04:54:51 +0000 (0:00:00.130) 0:11:22.785 ******* 2026-03-18 04:54:54.068532 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.068543 | orchestrator | 2026-03-18 04:54:54.068554 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-18 04:54:54.068565 | orchestrator | Wednesday 18 March 2026 04:54:51 +0000 (0:00:00.175) 0:11:22.961 ******* 2026-03-18 04:54:54.068575 
| orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.068586 | orchestrator | 2026-03-18 04:54:54.068597 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-18 04:54:54.068608 | orchestrator | Wednesday 18 March 2026 04:54:51 +0000 (0:00:00.151) 0:11:23.112 ******* 2026-03-18 04:54:54.068619 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.068629 | orchestrator | 2026-03-18 04:54:54.068640 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-18 04:54:54.068651 | orchestrator | Wednesday 18 March 2026 04:54:51 +0000 (0:00:00.156) 0:11:23.269 ******* 2026-03-18 04:54:54.068662 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:54:54.068689 | orchestrator | 2026-03-18 04:54:54.068700 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-18 04:54:54.068711 | orchestrator | Wednesday 18 March 2026 04:54:52 +0000 (0:00:00.442) 0:11:23.712 ******* 2026-03-18 04:54:54.068722 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:54:54.068732 | orchestrator | 2026-03-18 04:54:54.068743 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-18 04:54:54.068754 | orchestrator | Wednesday 18 March 2026 04:54:53 +0000 (0:00:01.585) 0:11:25.298 ******* 2026-03-18 04:54:54.068764 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:54:54.068775 | orchestrator | 2026-03-18 04:54:54.068786 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-18 04:54:54.068797 | orchestrator | Wednesday 18 March 2026 04:54:53 +0000 (0:00:00.146) 0:11:25.444 ******* 2026-03-18 04:54:54.068807 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-03-18 04:54:54.068825 | orchestrator | 2026-03-18 04:54:54.068843 | orchestrator | TASK [ceph-container-common : 
Set_fact ceph_release jewel] ********************* 2026-03-18 04:55:07.131997 | orchestrator | Wednesday 18 March 2026 04:54:54 +0000 (0:00:00.227) 0:11:25.672 ******* 2026-03-18 04:55:07.132101 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.132117 | orchestrator | 2026-03-18 04:55:07.132131 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-18 04:55:07.132142 | orchestrator | Wednesday 18 March 2026 04:54:54 +0000 (0:00:00.155) 0:11:25.828 ******* 2026-03-18 04:55:07.132153 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.132163 | orchestrator | 2026-03-18 04:55:07.132175 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-18 04:55:07.132185 | orchestrator | Wednesday 18 March 2026 04:54:54 +0000 (0:00:00.154) 0:11:25.982 ******* 2026-03-18 04:55:07.132196 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.132207 | orchestrator | 2026-03-18 04:55:07.132217 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-18 04:55:07.132228 | orchestrator | Wednesday 18 March 2026 04:54:54 +0000 (0:00:00.162) 0:11:26.145 ******* 2026-03-18 04:55:07.132239 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.132250 | orchestrator | 2026-03-18 04:55:07.132260 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-18 04:55:07.132271 | orchestrator | Wednesday 18 March 2026 04:54:54 +0000 (0:00:00.157) 0:11:26.303 ******* 2026-03-18 04:55:07.132282 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.132293 | orchestrator | 2026-03-18 04:55:07.132304 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-18 04:55:07.132314 | orchestrator | Wednesday 18 March 2026 04:54:54 +0000 (0:00:00.165) 0:11:26.468 ******* 2026-03-18 04:55:07.132325 | orchestrator | 
skipping: [testbed-node-1] 2026-03-18 04:55:07.132336 | orchestrator | 2026-03-18 04:55:07.132346 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-18 04:55:07.132357 | orchestrator | Wednesday 18 March 2026 04:54:55 +0000 (0:00:00.153) 0:11:26.622 ******* 2026-03-18 04:55:07.132368 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.132378 | orchestrator | 2026-03-18 04:55:07.132389 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-18 04:55:07.132400 | orchestrator | Wednesday 18 March 2026 04:54:55 +0000 (0:00:00.174) 0:11:26.796 ******* 2026-03-18 04:55:07.132411 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.132421 | orchestrator | 2026-03-18 04:55:07.132432 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-18 04:55:07.132443 | orchestrator | Wednesday 18 March 2026 04:54:55 +0000 (0:00:00.153) 0:11:26.950 ******* 2026-03-18 04:55:07.132455 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:55:07.132466 | orchestrator | 2026-03-18 04:55:07.132477 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-18 04:55:07.132487 | orchestrator | Wednesday 18 March 2026 04:54:55 +0000 (0:00:00.509) 0:11:27.460 ******* 2026-03-18 04:55:07.132498 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-03-18 04:55:07.132509 | orchestrator | 2026-03-18 04:55:07.132520 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-18 04:55:07.132531 | orchestrator | Wednesday 18 March 2026 04:54:56 +0000 (0:00:00.222) 0:11:27.682 ******* 2026-03-18 04:55:07.132557 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-03-18 04:55:07.132572 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-18 
04:55:07.132586 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-18 04:55:07.132598 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-18 04:55:07.132610 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-18 04:55:07.132622 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-18 04:55:07.132635 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-18 04:55:07.132670 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-18 04:55:07.132709 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 04:55:07.132723 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 04:55:07.132735 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 04:55:07.132748 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 04:55:07.132760 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 04:55:07.132773 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 04:55:07.132785 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-03-18 04:55:07.132798 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-03-18 04:55:07.132811 | orchestrator | 2026-03-18 04:55:07.132823 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-18 04:55:07.132836 | orchestrator | Wednesday 18 March 2026 04:55:01 +0000 (0:00:05.748) 0:11:33.431 ******* 2026-03-18 04:55:07.132848 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.132862 | orchestrator | 2026-03-18 04:55:07.132874 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-18 04:55:07.132887 | orchestrator | Wednesday 18 March 2026 04:55:01 +0000 (0:00:00.141) 0:11:33.572 ******* 
2026-03-18 04:55:07.132900 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.132912 | orchestrator | 2026-03-18 04:55:07.132923 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-18 04:55:07.132934 | orchestrator | Wednesday 18 March 2026 04:55:02 +0000 (0:00:00.137) 0:11:33.709 ******* 2026-03-18 04:55:07.132944 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.132955 | orchestrator | 2026-03-18 04:55:07.132965 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-18 04:55:07.132976 | orchestrator | Wednesday 18 March 2026 04:55:02 +0000 (0:00:00.133) 0:11:33.843 ******* 2026-03-18 04:55:07.132986 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.132997 | orchestrator | 2026-03-18 04:55:07.133008 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-18 04:55:07.133033 | orchestrator | Wednesday 18 March 2026 04:55:02 +0000 (0:00:00.141) 0:11:33.985 ******* 2026-03-18 04:55:07.133045 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133055 | orchestrator | 2026-03-18 04:55:07.133066 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-18 04:55:07.133077 | orchestrator | Wednesday 18 March 2026 04:55:02 +0000 (0:00:00.143) 0:11:34.129 ******* 2026-03-18 04:55:07.133088 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133099 | orchestrator | 2026-03-18 04:55:07.133110 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-18 04:55:07.133121 | orchestrator | Wednesday 18 March 2026 04:55:02 +0000 (0:00:00.130) 0:11:34.260 ******* 2026-03-18 04:55:07.133131 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133142 | orchestrator | 2026-03-18 04:55:07.133153 | orchestrator | TASK [ceph-config : Set_fact 
num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-18 04:55:07.133164 | orchestrator | Wednesday 18 March 2026 04:55:02 +0000 (0:00:00.146) 0:11:34.406 ******* 2026-03-18 04:55:07.133175 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133185 | orchestrator | 2026-03-18 04:55:07.133196 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-18 04:55:07.133207 | orchestrator | Wednesday 18 March 2026 04:55:02 +0000 (0:00:00.120) 0:11:34.527 ******* 2026-03-18 04:55:07.133218 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133228 | orchestrator | 2026-03-18 04:55:07.133239 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-18 04:55:07.133250 | orchestrator | Wednesday 18 March 2026 04:55:03 +0000 (0:00:00.140) 0:11:34.667 ******* 2026-03-18 04:55:07.133261 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133280 | orchestrator | 2026-03-18 04:55:07.133290 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-18 04:55:07.133301 | orchestrator | Wednesday 18 March 2026 04:55:03 +0000 (0:00:00.421) 0:11:35.089 ******* 2026-03-18 04:55:07.133312 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133322 | orchestrator | 2026-03-18 04:55:07.133333 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-18 04:55:07.133344 | orchestrator | Wednesday 18 March 2026 04:55:03 +0000 (0:00:00.147) 0:11:35.236 ******* 2026-03-18 04:55:07.133355 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133365 | orchestrator | 2026-03-18 04:55:07.133376 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-18 04:55:07.133387 | orchestrator | Wednesday 18 March 2026 04:55:03 +0000 
(0:00:00.138) 0:11:35.375 ******* 2026-03-18 04:55:07.133398 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133408 | orchestrator | 2026-03-18 04:55:07.133419 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-18 04:55:07.133430 | orchestrator | Wednesday 18 March 2026 04:55:03 +0000 (0:00:00.243) 0:11:35.618 ******* 2026-03-18 04:55:07.133441 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133452 | orchestrator | 2026-03-18 04:55:07.133462 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-18 04:55:07.133473 | orchestrator | Wednesday 18 March 2026 04:55:04 +0000 (0:00:00.138) 0:11:35.756 ******* 2026-03-18 04:55:07.133484 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133494 | orchestrator | 2026-03-18 04:55:07.133510 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-18 04:55:07.133521 | orchestrator | Wednesday 18 March 2026 04:55:04 +0000 (0:00:00.236) 0:11:35.993 ******* 2026-03-18 04:55:07.133532 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133543 | orchestrator | 2026-03-18 04:55:07.133554 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-18 04:55:07.133565 | orchestrator | Wednesday 18 March 2026 04:55:04 +0000 (0:00:00.154) 0:11:36.147 ******* 2026-03-18 04:55:07.133575 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133586 | orchestrator | 2026-03-18 04:55:07.133597 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 04:55:07.133610 | orchestrator | Wednesday 18 March 2026 04:55:04 +0000 (0:00:00.154) 0:11:36.302 ******* 2026-03-18 04:55:07.133620 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133631 | orchestrator | 
2026-03-18 04:55:07.133642 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-18 04:55:07.133652 | orchestrator | Wednesday 18 March 2026 04:55:04 +0000 (0:00:00.148) 0:11:36.450 ******* 2026-03-18 04:55:07.133663 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133674 | orchestrator | 2026-03-18 04:55:07.133712 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 04:55:07.133731 | orchestrator | Wednesday 18 March 2026 04:55:04 +0000 (0:00:00.139) 0:11:36.590 ******* 2026-03-18 04:55:07.133750 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133768 | orchestrator | 2026-03-18 04:55:07.133780 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 04:55:07.133791 | orchestrator | Wednesday 18 March 2026 04:55:05 +0000 (0:00:00.159) 0:11:36.750 ******* 2026-03-18 04:55:07.133801 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133812 | orchestrator | 2026-03-18 04:55:07.133822 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 04:55:07.133833 | orchestrator | Wednesday 18 March 2026 04:55:05 +0000 (0:00:00.143) 0:11:36.894 ******* 2026-03-18 04:55:07.133844 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-18 04:55:07.133855 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-18 04:55:07.133866 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-18 04:55:07.133884 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:07.133895 | orchestrator | 2026-03-18 04:55:07.133905 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 04:55:07.133916 | orchestrator | Wednesday 18 March 2026 04:55:06 +0000 (0:00:00.759) 0:11:37.654 ******* 2026-03-18 04:55:07.133927 | 
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-18 04:55:07.133946 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-18 04:55:35.310123 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-18 04:55:35.310274 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:35.310301 | orchestrator | 2026-03-18 04:55:35.310323 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 04:55:35.310345 | orchestrator | Wednesday 18 March 2026 04:55:07 +0000 (0:00:01.081) 0:11:38.736 ******* 2026-03-18 04:55:35.310363 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-18 04:55:35.310382 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-18 04:55:35.310401 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-18 04:55:35.310420 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:35.310439 | orchestrator | 2026-03-18 04:55:35.310459 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 04:55:35.310478 | orchestrator | Wednesday 18 March 2026 04:55:07 +0000 (0:00:00.392) 0:11:39.128 ******* 2026-03-18 04:55:35.310497 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:35.310516 | orchestrator | 2026-03-18 04:55:35.310537 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 04:55:35.310566 | orchestrator | Wednesday 18 March 2026 04:55:07 +0000 (0:00:00.140) 0:11:39.269 ******* 2026-03-18 04:55:35.310597 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-18 04:55:35.310628 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:35.310650 | orchestrator | 2026-03-18 04:55:35.310670 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-18 04:55:35.310694 | orchestrator | 
Wednesday 18 March 2026 04:55:07 +0000 (0:00:00.295) 0:11:39.564 ******* 2026-03-18 04:55:35.310755 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:55:35.310784 | orchestrator | 2026-03-18 04:55:35.310812 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-18 04:55:35.310838 | orchestrator | Wednesday 18 March 2026 04:55:08 +0000 (0:00:00.819) 0:11:40.384 ******* 2026-03-18 04:55:35.310865 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:55:35.310890 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-18 04:55:35.310910 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:55:35.310928 | orchestrator | 2026-03-18 04:55:35.310947 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-18 04:55:35.310967 | orchestrator | Wednesday 18 March 2026 04:55:09 +0000 (0:00:00.598) 0:11:40.983 ******* 2026-03-18 04:55:35.310986 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1 2026-03-18 04:55:35.311004 | orchestrator | 2026-03-18 04:55:35.311023 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-18 04:55:35.311042 | orchestrator | Wednesday 18 March 2026 04:55:09 +0000 (0:00:00.204) 0:11:41.187 ******* 2026-03-18 04:55:35.311061 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:55:35.311079 | orchestrator | 2026-03-18 04:55:35.311098 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-18 04:55:35.311117 | orchestrator | Wednesday 18 March 2026 04:55:10 +0000 (0:00:00.519) 0:11:41.706 ******* 2026-03-18 04:55:35.311155 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:35.311175 | orchestrator | 2026-03-18 04:55:35.311194 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) 
on a mon node] ********************* 2026-03-18 04:55:35.311214 | orchestrator | Wednesday 18 March 2026 04:55:10 +0000 (0:00:00.128) 0:11:41.834 ******* 2026-03-18 04:55:35.311263 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 04:55:35.311282 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 04:55:35.311301 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 04:55:35.311320 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}] 2026-03-18 04:55:35.311338 | orchestrator | 2026-03-18 04:55:35.311358 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-18 04:55:35.311377 | orchestrator | Wednesday 18 March 2026 04:55:16 +0000 (0:00:06.284) 0:11:48.119 ******* 2026-03-18 04:55:35.311395 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:55:35.311414 | orchestrator | 2026-03-18 04:55:35.311432 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-18 04:55:35.311451 | orchestrator | Wednesday 18 March 2026 04:55:16 +0000 (0:00:00.474) 0:11:48.594 ******* 2026-03-18 04:55:35.311469 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-18 04:55:35.311487 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-18 04:55:35.311506 | orchestrator | 2026-03-18 04:55:35.311524 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-18 04:55:35.311542 | orchestrator | Wednesday 18 March 2026 04:55:19 +0000 (0:00:02.251) 0:11:50.845 ******* 2026-03-18 04:55:35.311561 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-18 04:55:35.311579 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-18 04:55:35.311597 | orchestrator | 2026-03-18 04:55:35.311616 | orchestrator | TASK [ceph-mgr : Set mgr key 
permissions] ************************************** 2026-03-18 04:55:35.311635 | orchestrator | Wednesday 18 March 2026 04:55:20 +0000 (0:00:01.151) 0:11:51.997 ******* 2026-03-18 04:55:35.311653 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:55:35.311671 | orchestrator | 2026-03-18 04:55:35.311688 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-18 04:55:35.311767 | orchestrator | Wednesday 18 March 2026 04:55:20 +0000 (0:00:00.506) 0:11:52.504 ******* 2026-03-18 04:55:35.311786 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:35.311803 | orchestrator | 2026-03-18 04:55:35.311815 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-18 04:55:35.311826 | orchestrator | Wednesday 18 March 2026 04:55:21 +0000 (0:00:00.172) 0:11:52.677 ******* 2026-03-18 04:55:35.311837 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:35.311847 | orchestrator | 2026-03-18 04:55:35.311858 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-18 04:55:35.311894 | orchestrator | Wednesday 18 March 2026 04:55:21 +0000 (0:00:00.136) 0:11:52.814 ******* 2026-03-18 04:55:35.311913 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1 2026-03-18 04:55:35.311930 | orchestrator | 2026-03-18 04:55:35.311948 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-18 04:55:35.311969 | orchestrator | Wednesday 18 March 2026 04:55:21 +0000 (0:00:00.257) 0:11:53.071 ******* 2026-03-18 04:55:35.311987 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:35.312003 | orchestrator | 2026-03-18 04:55:35.312014 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-18 04:55:35.312023 | orchestrator | Wednesday 18 March 2026 04:55:21 +0000 (0:00:00.159) 0:11:53.231 ******* 
2026-03-18 04:55:35.312040 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:35.312055 | orchestrator | 2026-03-18 04:55:35.312071 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-18 04:55:35.312088 | orchestrator | Wednesday 18 March 2026 04:55:21 +0000 (0:00:00.140) 0:11:53.372 ******* 2026-03-18 04:55:35.312103 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1 2026-03-18 04:55:35.312119 | orchestrator | 2026-03-18 04:55:35.312134 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-18 04:55:35.312149 | orchestrator | Wednesday 18 March 2026 04:55:21 +0000 (0:00:00.211) 0:11:53.584 ******* 2026-03-18 04:55:35.312182 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:55:35.312199 | orchestrator | 2026-03-18 04:55:35.312215 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-18 04:55:35.312232 | orchestrator | Wednesday 18 March 2026 04:55:23 +0000 (0:00:01.075) 0:11:54.659 ******* 2026-03-18 04:55:35.312248 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:55:35.312264 | orchestrator | 2026-03-18 04:55:35.312274 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-18 04:55:35.312284 | orchestrator | Wednesday 18 March 2026 04:55:24 +0000 (0:00:01.325) 0:11:55.984 ******* 2026-03-18 04:55:35.312293 | orchestrator | ok: [testbed-node-1] 2026-03-18 04:55:35.312303 | orchestrator | 2026-03-18 04:55:35.312313 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-18 04:55:35.312322 | orchestrator | Wednesday 18 March 2026 04:55:25 +0000 (0:00:01.458) 0:11:57.443 ******* 2026-03-18 04:55:35.312332 | orchestrator | changed: [testbed-node-1] 2026-03-18 04:55:35.312341 | orchestrator | 2026-03-18 04:55:35.312351 | orchestrator | TASK [ceph-mgr : Include 
mgr_modules.yml] ************************************** 2026-03-18 04:55:35.312360 | orchestrator | Wednesday 18 March 2026 04:55:28 +0000 (0:00:02.909) 0:12:00.353 ******* 2026-03-18 04:55:35.312370 | orchestrator | skipping: [testbed-node-1] 2026-03-18 04:55:35.312379 | orchestrator | 2026-03-18 04:55:35.312389 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-18 04:55:35.312398 | orchestrator | 2026-03-18 04:55:35.312408 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-18 04:55:35.312417 | orchestrator | Wednesday 18 March 2026 04:55:29 +0000 (0:00:00.684) 0:12:01.038 ******* 2026-03-18 04:55:35.312427 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:55:35.312436 | orchestrator | 2026-03-18 04:55:35.312446 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-03-18 04:55:35.312464 | orchestrator | Wednesday 18 March 2026 04:55:31 +0000 (0:00:01.930) 0:12:02.968 ******* 2026-03-18 04:55:35.312473 | orchestrator | changed: [testbed-node-2] 2026-03-18 04:55:35.312483 | orchestrator | 2026-03-18 04:55:35.312492 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 04:55:35.312502 | orchestrator | Wednesday 18 March 2026 04:55:32 +0000 (0:00:01.552) 0:12:04.520 ******* 2026-03-18 04:55:35.312511 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-03-18 04:55:35.312521 | orchestrator | 2026-03-18 04:55:35.312530 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-18 04:55:35.312540 | orchestrator | Wednesday 18 March 2026 04:55:33 +0000 (0:00:00.284) 0:12:04.805 ******* 2026-03-18 04:55:35.312550 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:35.312559 | orchestrator | 2026-03-18 04:55:35.312569 | orchestrator | TASK [ceph-facts : Set_fact 
is_atomic] ***************************************** 2026-03-18 04:55:35.312578 | orchestrator | Wednesday 18 March 2026 04:55:33 +0000 (0:00:00.467) 0:12:05.273 ******* 2026-03-18 04:55:35.312587 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:35.312597 | orchestrator | 2026-03-18 04:55:35.312607 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 04:55:35.312616 | orchestrator | Wednesday 18 March 2026 04:55:33 +0000 (0:00:00.148) 0:12:05.421 ******* 2026-03-18 04:55:35.312626 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:35.312638 | orchestrator | 2026-03-18 04:55:35.312654 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 04:55:35.312674 | orchestrator | Wednesday 18 March 2026 04:55:34 +0000 (0:00:00.446) 0:12:05.868 ******* 2026-03-18 04:55:35.312698 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:35.312774 | orchestrator | 2026-03-18 04:55:35.312790 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-18 04:55:35.312807 | orchestrator | Wednesday 18 March 2026 04:55:34 +0000 (0:00:00.425) 0:12:06.293 ******* 2026-03-18 04:55:35.312823 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:35.312838 | orchestrator | 2026-03-18 04:55:35.312862 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-18 04:55:35.312872 | orchestrator | Wednesday 18 March 2026 04:55:34 +0000 (0:00:00.150) 0:12:06.444 ******* 2026-03-18 04:55:35.312881 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:35.312891 | orchestrator | 2026-03-18 04:55:35.312900 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-18 04:55:35.312918 | orchestrator | Wednesday 18 March 2026 04:55:34 +0000 (0:00:00.167) 0:12:06.611 ******* 2026-03-18 04:55:35.312937 | orchestrator | skipping: [testbed-node-2] 
2026-03-18 04:55:35.312962 | orchestrator | 2026-03-18 04:55:35.312978 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-18 04:55:35.312994 | orchestrator | Wednesday 18 March 2026 04:55:35 +0000 (0:00:00.152) 0:12:06.764 ******* 2026-03-18 04:55:35.313010 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:35.313025 | orchestrator | 2026-03-18 04:55:35.313056 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-18 04:55:43.067896 | orchestrator | Wednesday 18 March 2026 04:55:35 +0000 (0:00:00.152) 0:12:06.916 ******* 2026-03-18 04:55:43.068010 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:55:43.068028 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:55:43.068041 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-18 04:55:43.068054 | orchestrator | 2026-03-18 04:55:43.068066 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-18 04:55:43.068078 | orchestrator | Wednesday 18 March 2026 04:55:35 +0000 (0:00:00.691) 0:12:07.608 ******* 2026-03-18 04:55:43.068089 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:43.068101 | orchestrator | 2026-03-18 04:55:43.068112 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-18 04:55:43.068123 | orchestrator | Wednesday 18 March 2026 04:55:36 +0000 (0:00:00.268) 0:12:07.876 ******* 2026-03-18 04:55:43.068134 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:55:43.068145 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:55:43.068156 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-18 04:55:43.068167 | orchestrator | 
2026-03-18 04:55:43.068178 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-18 04:55:43.068188 | orchestrator | Wednesday 18 March 2026 04:55:38 +0000 (0:00:01.819) 0:12:09.696 ******* 2026-03-18 04:55:43.068200 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-18 04:55:43.068211 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-18 04:55:43.068222 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-18 04:55:43.068233 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:43.068244 | orchestrator | 2026-03-18 04:55:43.068255 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-18 04:55:43.068266 | orchestrator | Wednesday 18 March 2026 04:55:38 +0000 (0:00:00.426) 0:12:10.123 ******* 2026-03-18 04:55:43.068279 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-18 04:55:43.068293 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-18 04:55:43.068328 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-18 04:55:43.068350 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:43.068396 | orchestrator | 2026-03-18 04:55:43.068418 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-18 04:55:43.068441 | 
orchestrator | Wednesday 18 March 2026 04:55:39 +0000 (0:00:00.927) 0:12:11.051 ******* 2026-03-18 04:55:43.068465 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:55:43.068482 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:55:43.068494 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:55:43.068505 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:43.068516 | orchestrator | 2026-03-18 04:55:43.068527 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-18 04:55:43.068538 | orchestrator | Wednesday 18 March 2026 04:55:39 +0000 (0:00:00.190) 0:12:11.241 ******* 2026-03-18 04:55:43.068571 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'f231ed715636', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 04:55:36.759297', 'end': '2026-03-18 04:55:36.801154', 'delta': '0:00:00.041857', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f231ed715636'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-18 04:55:43.068586 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'c6b616adb9bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 04:55:37.325539', 'end': '2026-03-18 04:55:37.368422', 'delta': '0:00:00.042883', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c6b616adb9bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-18 04:55:43.068597 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '38d5679b5612', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 04:55:37.879931', 'end': '2026-03-18 04:55:37.929742', 'delta': '0:00:00.049811', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 
'removes': None, 'stdin': None}}, 'stdout_lines': ['38d5679b5612'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-18 04:55:43.068618 | orchestrator | 2026-03-18 04:55:43.068637 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-18 04:55:43.068648 | orchestrator | Wednesday 18 March 2026 04:55:39 +0000 (0:00:00.202) 0:12:11.443 ******* 2026-03-18 04:55:43.068659 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:43.068670 | orchestrator | 2026-03-18 04:55:43.068680 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-18 04:55:43.068691 | orchestrator | Wednesday 18 March 2026 04:55:40 +0000 (0:00:00.274) 0:12:11.718 ******* 2026-03-18 04:55:43.068702 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:43.068755 | orchestrator | 2026-03-18 04:55:43.068766 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-18 04:55:43.068777 | orchestrator | Wednesday 18 March 2026 04:55:41 +0000 (0:00:00.937) 0:12:12.656 ******* 2026-03-18 04:55:43.068788 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:43.068799 | orchestrator | 2026-03-18 04:55:43.068810 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-18 04:55:43.068820 | orchestrator | Wednesday 18 March 2026 04:55:41 +0000 (0:00:00.155) 0:12:12.811 ******* 2026-03-18 04:55:43.068831 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-18 04:55:43.068842 | orchestrator | 2026-03-18 04:55:43.068853 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 04:55:43.068863 | orchestrator | Wednesday 18 March 2026 04:55:42 +0000 (0:00:00.884) 0:12:13.696 ******* 2026-03-18 04:55:43.068874 | orchestrator | ok: [testbed-node-2] 2026-03-18 
04:55:43.068885 | orchestrator | 2026-03-18 04:55:43.068896 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-18 04:55:43.068907 | orchestrator | Wednesday 18 March 2026 04:55:42 +0000 (0:00:00.147) 0:12:13.844 ******* 2026-03-18 04:55:43.068917 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:43.068934 | orchestrator | 2026-03-18 04:55:43.068953 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-18 04:55:43.068971 | orchestrator | Wednesday 18 March 2026 04:55:42 +0000 (0:00:00.145) 0:12:13.989 ******* 2026-03-18 04:55:43.068989 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:43.069005 | orchestrator | 2026-03-18 04:55:43.069023 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 04:55:43.069043 | orchestrator | Wednesday 18 March 2026 04:55:42 +0000 (0:00:00.260) 0:12:14.250 ******* 2026-03-18 04:55:43.069063 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:43.069081 | orchestrator | 2026-03-18 04:55:43.069100 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-18 04:55:43.069112 | orchestrator | Wednesday 18 March 2026 04:55:42 +0000 (0:00:00.136) 0:12:14.387 ******* 2026-03-18 04:55:43.069123 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:43.069133 | orchestrator | 2026-03-18 04:55:43.069144 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-18 04:55:43.069155 | orchestrator | Wednesday 18 March 2026 04:55:42 +0000 (0:00:00.126) 0:12:14.514 ******* 2026-03-18 04:55:43.069166 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:43.069177 | orchestrator | 2026-03-18 04:55:43.069198 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-18 04:55:44.837208 | orchestrator | Wednesday 18 
March 2026 04:55:43 +0000 (0:00:00.157) 0:12:14.672 ******* 2026-03-18 04:55:44.837312 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:44.837329 | orchestrator | 2026-03-18 04:55:44.837342 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-18 04:55:44.837354 | orchestrator | Wednesday 18 March 2026 04:55:43 +0000 (0:00:00.151) 0:12:14.823 ******* 2026-03-18 04:55:44.837365 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:44.837376 | orchestrator | 2026-03-18 04:55:44.837388 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-18 04:55:44.837422 | orchestrator | Wednesday 18 March 2026 04:55:43 +0000 (0:00:00.163) 0:12:14.986 ******* 2026-03-18 04:55:44.837434 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:44.837445 | orchestrator | 2026-03-18 04:55:44.837455 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-18 04:55:44.837467 | orchestrator | Wednesday 18 March 2026 04:55:43 +0000 (0:00:00.144) 0:12:15.131 ******* 2026-03-18 04:55:44.837477 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:44.837488 | orchestrator | 2026-03-18 04:55:44.837499 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-18 04:55:44.837509 | orchestrator | Wednesday 18 March 2026 04:55:43 +0000 (0:00:00.141) 0:12:15.272 ******* 2026-03-18 04:55:44.837522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:55:44.837537 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:55:44.837563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:55:44.837577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 04:55:44.837591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:55:44.837603 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:55:44.837613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:55:44.837663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bbfcb729', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': 
'8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-18 04:55:44.837679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:55:44.837691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:55:44.837702 | orchestrator | 
skipping: [testbed-node-2] 2026-03-18 04:55:44.837740 | orchestrator | 2026-03-18 04:55:44.837753 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-18 04:55:44.837766 | orchestrator | Wednesday 18 March 2026 04:55:43 +0000 (0:00:00.237) 0:12:15.510 ******* 2026-03-18 04:55:44.837779 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:55:44.837803 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:55:46.475083 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:55:46.475925 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:55:46.475973 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:55:46.475987 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:55:46.475998 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:55:46.476034 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bbfcb729', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part16', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 
'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part14', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part15', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part1', 'scsi-SQEMU_QEMU_HARDDISK_bbfcb729-d4f0-4316-9872-0560f57ec1dc-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:55:46.476070 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:55:46.476081 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:55:46.476092 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:46.476104 | orchestrator | 2026-03-18 04:55:46.476115 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-18 04:55:46.476126 | orchestrator | Wednesday 18 March 2026 04:55:44 +0000 (0:00:00.937) 0:12:16.448 ******* 2026-03-18 04:55:46.476136 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:46.476146 | orchestrator | 2026-03-18 04:55:46.476156 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-18 04:55:46.476172 | orchestrator 
| Wednesday 18 March 2026 04:55:45 +0000 (0:00:00.480) 0:12:16.928 ******* 2026-03-18 04:55:46.476182 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:46.476192 | orchestrator | 2026-03-18 04:55:46.476201 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 04:55:46.476211 | orchestrator | Wednesday 18 March 2026 04:55:45 +0000 (0:00:00.146) 0:12:17.075 ******* 2026-03-18 04:55:46.476221 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:46.476230 | orchestrator | 2026-03-18 04:55:46.476240 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 04:55:46.476250 | orchestrator | Wednesday 18 March 2026 04:55:45 +0000 (0:00:00.432) 0:12:17.507 ******* 2026-03-18 04:55:46.476259 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:46.476269 | orchestrator | 2026-03-18 04:55:46.476279 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 04:55:46.476289 | orchestrator | Wednesday 18 March 2026 04:55:46 +0000 (0:00:00.149) 0:12:17.656 ******* 2026-03-18 04:55:46.476299 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:46.476308 | orchestrator | 2026-03-18 04:55:46.476318 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 04:55:46.476327 | orchestrator | Wednesday 18 March 2026 04:55:46 +0000 (0:00:00.265) 0:12:17.921 ******* 2026-03-18 04:55:46.476337 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:46.476346 | orchestrator | 2026-03-18 04:55:46.476356 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-18 04:55:46.476373 | orchestrator | Wednesday 18 March 2026 04:55:46 +0000 (0:00:00.161) 0:12:18.083 ******* 2026-03-18 04:55:57.685224 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-18 04:55:57.685368 | orchestrator | ok: 
[testbed-node-2] => (item=testbed-node-1) 2026-03-18 04:55:57.685392 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-18 04:55:57.685412 | orchestrator | 2026-03-18 04:55:57.685431 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-18 04:55:57.685451 | orchestrator | Wednesday 18 March 2026 04:55:47 +0000 (0:00:00.952) 0:12:19.035 ******* 2026-03-18 04:55:57.685470 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-18 04:55:57.685488 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-18 04:55:57.685506 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-18 04:55:57.685523 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.685540 | orchestrator | 2026-03-18 04:55:57.685559 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-18 04:55:57.685576 | orchestrator | Wednesday 18 March 2026 04:55:47 +0000 (0:00:00.163) 0:12:19.198 ******* 2026-03-18 04:55:57.685595 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.685612 | orchestrator | 2026-03-18 04:55:57.685630 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-18 04:55:57.685648 | orchestrator | Wednesday 18 March 2026 04:55:47 +0000 (0:00:00.155) 0:12:19.354 ******* 2026-03-18 04:55:57.685665 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:55:57.685683 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:55:57.685700 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-18 04:55:57.685751 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-18 04:55:57.685774 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-18 04:55:57.685794 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 04:55:57.685817 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 04:55:57.685838 | orchestrator | 2026-03-18 04:55:57.685858 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-18 04:55:57.685877 | orchestrator | Wednesday 18 March 2026 04:55:48 +0000 (0:00:01.180) 0:12:20.535 ******* 2026-03-18 04:55:57.685930 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:55:57.685970 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:55:57.685991 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-18 04:55:57.686011 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-18 04:55:57.686100 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 04:55:57.686120 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 04:55:57.686138 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 04:55:57.686160 | orchestrator | 2026-03-18 04:55:57.686178 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-18 04:55:57.686197 | orchestrator | Wednesday 18 March 2026 04:55:50 +0000 (0:00:01.750) 0:12:22.285 ******* 2026-03-18 04:55:57.686214 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-03-18 04:55:57.686237 | orchestrator | 2026-03-18 04:55:57.686255 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-18 04:55:57.686273 
| orchestrator | Wednesday 18 March 2026 04:55:51 +0000 (0:00:00.530) 0:12:22.816 ******* 2026-03-18 04:55:57.686291 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-03-18 04:55:57.686311 | orchestrator | 2026-03-18 04:55:57.686328 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-18 04:55:57.686348 | orchestrator | Wednesday 18 March 2026 04:55:51 +0000 (0:00:00.233) 0:12:23.049 ******* 2026-03-18 04:55:57.686365 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:57.686384 | orchestrator | 2026-03-18 04:55:57.686402 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-18 04:55:57.686421 | orchestrator | Wednesday 18 March 2026 04:55:51 +0000 (0:00:00.508) 0:12:23.558 ******* 2026-03-18 04:55:57.686439 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.686459 | orchestrator | 2026-03-18 04:55:57.686477 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-18 04:55:57.686496 | orchestrator | Wednesday 18 March 2026 04:55:52 +0000 (0:00:00.135) 0:12:23.694 ******* 2026-03-18 04:55:57.686512 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.686531 | orchestrator | 2026-03-18 04:55:57.686547 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-18 04:55:57.686567 | orchestrator | Wednesday 18 March 2026 04:55:52 +0000 (0:00:00.145) 0:12:23.839 ******* 2026-03-18 04:55:57.686584 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.686604 | orchestrator | 2026-03-18 04:55:57.686623 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-18 04:55:57.686642 | orchestrator | Wednesday 18 March 2026 04:55:52 +0000 (0:00:00.145) 0:12:23.984 ******* 2026-03-18 04:55:57.686660 | orchestrator | ok: [testbed-node-2] 
2026-03-18 04:55:57.686678 | orchestrator | 2026-03-18 04:55:57.686696 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-18 04:55:57.686714 | orchestrator | Wednesday 18 March 2026 04:55:52 +0000 (0:00:00.517) 0:12:24.502 ******* 2026-03-18 04:55:57.686760 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.686780 | orchestrator | 2026-03-18 04:55:57.686798 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-18 04:55:57.686844 | orchestrator | Wednesday 18 March 2026 04:55:53 +0000 (0:00:00.150) 0:12:24.652 ******* 2026-03-18 04:55:57.686863 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.686881 | orchestrator | 2026-03-18 04:55:57.686898 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-18 04:55:57.686916 | orchestrator | Wednesday 18 March 2026 04:55:53 +0000 (0:00:00.170) 0:12:24.823 ******* 2026-03-18 04:55:57.686933 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:57.686968 | orchestrator | 2026-03-18 04:55:57.686987 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-18 04:55:57.687005 | orchestrator | Wednesday 18 March 2026 04:55:53 +0000 (0:00:00.487) 0:12:25.310 ******* 2026-03-18 04:55:57.687024 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:57.687043 | orchestrator | 2026-03-18 04:55:57.687059 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-18 04:55:57.687078 | orchestrator | Wednesday 18 March 2026 04:55:54 +0000 (0:00:00.593) 0:12:25.903 ******* 2026-03-18 04:55:57.687097 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.687115 | orchestrator | 2026-03-18 04:55:57.687133 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-18 04:55:57.687150 | orchestrator | Wednesday 18 
March 2026 04:55:54 +0000 (0:00:00.390) 0:12:26.294 ******* 2026-03-18 04:55:57.687168 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:57.687187 | orchestrator | 2026-03-18 04:55:57.687203 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-18 04:55:57.687222 | orchestrator | Wednesday 18 March 2026 04:55:54 +0000 (0:00:00.161) 0:12:26.455 ******* 2026-03-18 04:55:57.687239 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.687255 | orchestrator | 2026-03-18 04:55:57.687271 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-18 04:55:57.687288 | orchestrator | Wednesday 18 March 2026 04:55:54 +0000 (0:00:00.146) 0:12:26.602 ******* 2026-03-18 04:55:57.687307 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.687324 | orchestrator | 2026-03-18 04:55:57.687342 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-18 04:55:57.687359 | orchestrator | Wednesday 18 March 2026 04:55:55 +0000 (0:00:00.162) 0:12:26.765 ******* 2026-03-18 04:55:57.687377 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.687395 | orchestrator | 2026-03-18 04:55:57.687413 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-18 04:55:57.687432 | orchestrator | Wednesday 18 March 2026 04:55:55 +0000 (0:00:00.141) 0:12:26.906 ******* 2026-03-18 04:55:57.687451 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.687469 | orchestrator | 2026-03-18 04:55:57.687499 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-18 04:55:57.687518 | orchestrator | Wednesday 18 March 2026 04:55:55 +0000 (0:00:00.136) 0:12:27.042 ******* 2026-03-18 04:55:57.687536 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.687554 | orchestrator | 2026-03-18 04:55:57.687573 | orchestrator | 
TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-18 04:55:57.687591 | orchestrator | Wednesday 18 March 2026 04:55:55 +0000 (0:00:00.146) 0:12:27.189 ******* 2026-03-18 04:55:57.687609 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:57.687626 | orchestrator | 2026-03-18 04:55:57.687645 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-18 04:55:57.687662 | orchestrator | Wednesday 18 March 2026 04:55:55 +0000 (0:00:00.168) 0:12:27.358 ******* 2026-03-18 04:55:57.687679 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:57.687698 | orchestrator | 2026-03-18 04:55:57.687716 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-18 04:55:57.687761 | orchestrator | Wednesday 18 March 2026 04:55:55 +0000 (0:00:00.184) 0:12:27.543 ******* 2026-03-18 04:55:57.687781 | orchestrator | ok: [testbed-node-2] 2026-03-18 04:55:57.687799 | orchestrator | 2026-03-18 04:55:57.687816 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-18 04:55:57.687833 | orchestrator | Wednesday 18 March 2026 04:55:56 +0000 (0:00:00.274) 0:12:27.817 ******* 2026-03-18 04:55:57.687851 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.687869 | orchestrator | 2026-03-18 04:55:57.687885 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-18 04:55:57.687903 | orchestrator | Wednesday 18 March 2026 04:55:56 +0000 (0:00:00.149) 0:12:27.966 ******* 2026-03-18 04:55:57.687921 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.687951 | orchestrator | 2026-03-18 04:55:57.687970 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-18 04:55:57.687987 | orchestrator | Wednesday 18 March 2026 04:55:56 +0000 (0:00:00.154) 0:12:28.121 ******* 2026-03-18 04:55:57.688003 | 
orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.688021 | orchestrator | 2026-03-18 04:55:57.688038 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-18 04:55:57.688056 | orchestrator | Wednesday 18 March 2026 04:55:56 +0000 (0:00:00.440) 0:12:28.561 ******* 2026-03-18 04:55:57.688073 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.688090 | orchestrator | 2026-03-18 04:55:57.688107 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-18 04:55:57.688126 | orchestrator | Wednesday 18 March 2026 04:55:57 +0000 (0:00:00.143) 0:12:28.705 ******* 2026-03-18 04:55:57.688144 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.688163 | orchestrator | 2026-03-18 04:55:57.688181 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-18 04:55:57.688199 | orchestrator | Wednesday 18 March 2026 04:55:57 +0000 (0:00:00.159) 0:12:28.864 ******* 2026-03-18 04:55:57.688214 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.688231 | orchestrator | 2026-03-18 04:55:57.688249 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-18 04:55:57.688266 | orchestrator | Wednesday 18 March 2026 04:55:57 +0000 (0:00:00.140) 0:12:29.005 ******* 2026-03-18 04:55:57.688284 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:55:57.688301 | orchestrator | 2026-03-18 04:55:57.688319 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-18 04:55:57.688338 | orchestrator | Wednesday 18 March 2026 04:55:57 +0000 (0:00:00.142) 0:12:29.147 ******* 2026-03-18 04:55:57.688371 | orchestrator | skipping: [testbed-node-2] 2026-03-18 04:56:15.460363 | orchestrator | 2026-03-18 04:56:15.460499 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] 
*************************
2026-03-18 04:56:15.460524 | orchestrator | Wednesday 18 March 2026 04:55:57 +0000 (0:00:00.144) 0:12:29.292 *******
2026-03-18 04:56:15.460543 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.460561 | orchestrator |
2026-03-18 04:56:15.460576 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-18 04:56:15.460588 | orchestrator | Wednesday 18 March 2026 04:55:57 +0000 (0:00:00.142) 0:12:29.434 *******
2026-03-18 04:56:15.460606 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.460622 | orchestrator |
2026-03-18 04:56:15.460639 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-18 04:56:15.460655 | orchestrator | Wednesday 18 March 2026 04:55:57 +0000 (0:00:00.142) 0:12:29.577 *******
2026-03-18 04:56:15.460671 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.460688 | orchestrator |
2026-03-18 04:56:15.460704 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-18 04:56:15.460721 | orchestrator | Wednesday 18 March 2026 04:55:58 +0000 (0:00:00.128) 0:12:29.705 *******
2026-03-18 04:56:15.460798 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.460821 | orchestrator |
2026-03-18 04:56:15.460838 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-18 04:56:15.460859 | orchestrator | Wednesday 18 March 2026 04:55:58 +0000 (0:00:00.222) 0:12:29.928 *******
2026-03-18 04:56:15.460881 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:56:15.460899 | orchestrator |
2026-03-18 04:56:15.460916 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-18 04:56:15.460932 | orchestrator | Wednesday 18 March 2026 04:55:59 +0000 (0:00:00.974) 0:12:30.902 *******
2026-03-18 04:56:15.460949 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:56:15.460965 | orchestrator |
2026-03-18 04:56:15.460982 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-18 04:56:15.460999 | orchestrator | Wednesday 18 March 2026 04:56:00 +0000 (0:00:01.428) 0:12:32.331 *******
2026-03-18 04:56:15.461041 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-03-18 04:56:15.461060 | orchestrator |
2026-03-18 04:56:15.461076 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-18 04:56:15.461092 | orchestrator | Wednesday 18 March 2026 04:56:01 +0000 (0:00:00.542) 0:12:32.874 *******
2026-03-18 04:56:15.461110 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.461125 | orchestrator |
2026-03-18 04:56:15.461141 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-18 04:56:15.461176 | orchestrator | Wednesday 18 March 2026 04:56:01 +0000 (0:00:00.129) 0:12:33.004 *******
2026-03-18 04:56:15.461193 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.461208 | orchestrator |
2026-03-18 04:56:15.461225 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-18 04:56:15.461241 | orchestrator | Wednesday 18 March 2026 04:56:01 +0000 (0:00:00.151) 0:12:33.156 *******
2026-03-18 04:56:15.461252 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-18 04:56:15.461264 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-18 04:56:15.461275 | orchestrator |
2026-03-18 04:56:15.461284 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-18 04:56:15.461294 | orchestrator | Wednesday 18 March 2026 04:56:02 +0000 (0:00:00.812) 0:12:33.968 *******
2026-03-18 04:56:15.461304 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:56:15.461318 | orchestrator |
2026-03-18 04:56:15.461332 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-18 04:56:15.461341 | orchestrator | Wednesday 18 March 2026 04:56:02 +0000 (0:00:00.507) 0:12:34.476 *******
2026-03-18 04:56:15.461351 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.461360 | orchestrator |
2026-03-18 04:56:15.461369 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-18 04:56:15.461379 | orchestrator | Wednesday 18 March 2026 04:56:03 +0000 (0:00:00.156) 0:12:34.632 *******
2026-03-18 04:56:15.461388 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.461402 | orchestrator |
2026-03-18 04:56:15.461418 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-18 04:56:15.461433 | orchestrator | Wednesday 18 March 2026 04:56:03 +0000 (0:00:00.133) 0:12:34.765 *******
2026-03-18 04:56:15.461448 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.461464 | orchestrator |
2026-03-18 04:56:15.461479 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-18 04:56:15.461495 | orchestrator | Wednesday 18 March 2026 04:56:03 +0000 (0:00:00.140) 0:12:34.905 *******
2026-03-18 04:56:15.461511 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-03-18 04:56:15.461527 | orchestrator |
2026-03-18 04:56:15.461542 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-18 04:56:15.461552 | orchestrator | Wednesday 18 March 2026 04:56:03 +0000 (0:00:00.216) 0:12:35.122 *******
2026-03-18 04:56:15.461561 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:56:15.461570 | orchestrator |
2026-03-18 04:56:15.461583 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-18 04:56:15.461599 | orchestrator | Wednesday 18 March 2026 04:56:04 +0000 (0:00:00.735) 0:12:35.857 *******
2026-03-18 04:56:15.461675 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-18 04:56:15.461685 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-18 04:56:15.461694 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-18 04:56:15.461704 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.461713 | orchestrator |
2026-03-18 04:56:15.461723 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-18 04:56:15.461759 | orchestrator | Wednesday 18 March 2026 04:56:04 +0000 (0:00:00.149) 0:12:36.007 *******
2026-03-18 04:56:15.461803 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.461813 | orchestrator |
2026-03-18 04:56:15.461823 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-18 04:56:15.461832 | orchestrator | Wednesday 18 March 2026 04:56:04 +0000 (0:00:00.135) 0:12:36.142 *******
2026-03-18 04:56:15.461841 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.461851 | orchestrator |
2026-03-18 04:56:15.461860 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-18 04:56:15.461869 | orchestrator | Wednesday 18 March 2026 04:56:05 +0000 (0:00:00.878) 0:12:37.021 *******
2026-03-18 04:56:15.461879 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.461888 | orchestrator |
2026-03-18 04:56:15.461898 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-18 04:56:15.461907 | orchestrator | Wednesday 18 March 2026 04:56:05 +0000 (0:00:00.155) 0:12:37.177 *******
2026-03-18 04:56:15.461917 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.461926 | orchestrator |
2026-03-18 04:56:15.461935 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-18 04:56:15.461945 | orchestrator | Wednesday 18 March 2026 04:56:05 +0000 (0:00:00.162) 0:12:37.339 *******
2026-03-18 04:56:15.461954 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.461964 | orchestrator |
2026-03-18 04:56:15.461973 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-18 04:56:15.461983 | orchestrator | Wednesday 18 March 2026 04:56:05 +0000 (0:00:00.158) 0:12:37.498 *******
2026-03-18 04:56:15.461992 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:56:15.462009 | orchestrator |
2026-03-18 04:56:15.462104 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-18 04:56:15.462121 | orchestrator | Wednesday 18 March 2026 04:56:07 +0000 (0:00:01.454) 0:12:38.953 *******
2026-03-18 04:56:15.462136 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:56:15.462152 | orchestrator |
2026-03-18 04:56:15.462167 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-18 04:56:15.462184 | orchestrator | Wednesday 18 March 2026 04:56:07 +0000 (0:00:00.163) 0:12:39.116 *******
2026-03-18 04:56:15.462200 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-03-18 04:56:15.462216 | orchestrator |
2026-03-18 04:56:15.462231 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-18 04:56:15.462247 | orchestrator | Wednesday 18 March 2026 04:56:07 +0000 (0:00:00.228) 0:12:39.345 *******
2026-03-18 04:56:15.462264 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.462279 | orchestrator |
2026-03-18 04:56:15.462296 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-18 04:56:15.462321 | orchestrator | Wednesday 18 March 2026 04:56:07 +0000 (0:00:00.159) 0:12:39.505 *******
2026-03-18 04:56:15.462331 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.462340 | orchestrator |
2026-03-18 04:56:15.462350 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-18 04:56:15.462359 | orchestrator | Wednesday 18 March 2026 04:56:08 +0000 (0:00:00.150) 0:12:39.656 *******
2026-03-18 04:56:15.462369 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.462378 | orchestrator |
2026-03-18 04:56:15.462388 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-18 04:56:15.462397 | orchestrator | Wednesday 18 March 2026 04:56:08 +0000 (0:00:00.166) 0:12:39.822 *******
2026-03-18 04:56:15.462407 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.462416 | orchestrator |
2026-03-18 04:56:15.462425 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-18 04:56:15.462435 | orchestrator | Wednesday 18 March 2026 04:56:08 +0000 (0:00:00.161) 0:12:39.984 *******
2026-03-18 04:56:15.462444 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.462454 | orchestrator |
2026-03-18 04:56:15.462463 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-18 04:56:15.462519 | orchestrator | Wednesday 18 March 2026 04:56:08 +0000 (0:00:00.163) 0:12:40.148 *******
2026-03-18 04:56:15.462539 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.462549 | orchestrator |
2026-03-18 04:56:15.462559 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-18 04:56:15.462568 | orchestrator | Wednesday 18 March 2026 04:56:08 +0000 (0:00:00.447) 0:12:40.595 *******
2026-03-18 04:56:15.462577 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.462587 | orchestrator |
2026-03-18 04:56:15.462596 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-18 04:56:15.462605 | orchestrator | Wednesday 18 March 2026 04:56:09 +0000 (0:00:00.163) 0:12:40.758 *******
2026-03-18 04:56:15.462615 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:15.462624 | orchestrator |
2026-03-18 04:56:15.462634 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-18 04:56:15.462643 | orchestrator | Wednesday 18 March 2026 04:56:09 +0000 (0:00:00.164) 0:12:40.923 *******
2026-03-18 04:56:15.462652 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:56:15.462662 | orchestrator |
2026-03-18 04:56:15.462671 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-18 04:56:15.462681 | orchestrator | Wednesday 18 March 2026 04:56:09 +0000 (0:00:00.224) 0:12:41.148 *******
2026-03-18 04:56:15.462690 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-03-18 04:56:15.462700 | orchestrator |
2026-03-18 04:56:15.462709 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-18 04:56:15.462719 | orchestrator | Wednesday 18 March 2026 04:56:09 +0000 (0:00:00.212) 0:12:41.361 *******
2026-03-18 04:56:15.462728 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-03-18 04:56:15.462794 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-18 04:56:15.462805 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-18 04:56:15.462814 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-18 04:56:15.462824 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-18 04:56:15.462834 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-18 04:56:15.462854 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-18 04:56:30.365970 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-18 04:56:30.366114 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-18 04:56:30.366128 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-18 04:56:30.366135 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-18 04:56:30.366141 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-18 04:56:30.366147 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-18 04:56:30.366155 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-18 04:56:30.366162 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-03-18 04:56:30.366169 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-03-18 04:56:30.366175 | orchestrator |
2026-03-18 04:56:30.366182 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-18 04:56:30.366188 | orchestrator | Wednesday 18 March 2026 04:56:15 +0000 (0:00:05.702) 0:12:47.063 *******
2026-03-18 04:56:30.366194 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366201 | orchestrator |
2026-03-18 04:56:30.366207 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-18 04:56:30.366215 | orchestrator | Wednesday 18 March 2026 04:56:15 +0000 (0:00:00.145) 0:12:47.209 *******
2026-03-18 04:56:30.366222 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366230 | orchestrator |
2026-03-18 04:56:30.366236 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-18 04:56:30.366242 | orchestrator | Wednesday 18 March 2026 04:56:15 +0000 (0:00:00.140) 0:12:47.350 *******
2026-03-18 04:56:30.366271 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366279 | orchestrator |
2026-03-18 04:56:30.366286 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-18 04:56:30.366294 | orchestrator | Wednesday 18 March 2026 04:56:15 +0000 (0:00:00.136) 0:12:47.486 *******
2026-03-18 04:56:30.366300 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366307 | orchestrator |
2026-03-18 04:56:30.366314 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-18 04:56:30.366320 | orchestrator | Wednesday 18 March 2026 04:56:16 +0000 (0:00:00.139) 0:12:47.626 *******
2026-03-18 04:56:30.366327 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366333 | orchestrator |
2026-03-18 04:56:30.366340 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-18 04:56:30.366360 | orchestrator | Wednesday 18 March 2026 04:56:16 +0000 (0:00:00.146) 0:12:47.773 *******
2026-03-18 04:56:30.366367 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366374 | orchestrator |
2026-03-18 04:56:30.366381 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-18 04:56:30.366389 | orchestrator | Wednesday 18 March 2026 04:56:16 +0000 (0:00:00.396) 0:12:48.169 *******
2026-03-18 04:56:30.366395 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366401 | orchestrator |
2026-03-18 04:56:30.366408 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-18 04:56:30.366415 | orchestrator | Wednesday 18 March 2026 04:56:16 +0000 (0:00:00.137) 0:12:48.306 *******
2026-03-18 04:56:30.366422 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366429 | orchestrator |
2026-03-18 04:56:30.366436 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-18 04:56:30.366443 | orchestrator | Wednesday 18 March 2026 04:56:16 +0000 (0:00:00.144) 0:12:48.450 *******
2026-03-18 04:56:30.366449 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366455 | orchestrator |
2026-03-18 04:56:30.366462 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-18 04:56:30.366468 | orchestrator | Wednesday 18 March 2026 04:56:16 +0000 (0:00:00.141) 0:12:48.592 *******
2026-03-18 04:56:30.366476 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366482 | orchestrator |
2026-03-18 04:56:30.366489 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-18 04:56:30.366495 | orchestrator | Wednesday 18 March 2026 04:56:17 +0000 (0:00:00.161) 0:12:48.754 *******
2026-03-18 04:56:30.366502 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366508 | orchestrator |
2026-03-18 04:56:30.366516 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-18 04:56:30.366524 | orchestrator | Wednesday 18 March 2026 04:56:17 +0000 (0:00:00.154) 0:12:48.909 *******
2026-03-18 04:56:30.366532 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366541 | orchestrator |
2026-03-18 04:56:30.366548 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-18 04:56:30.366556 | orchestrator | Wednesday 18 March 2026 04:56:17 +0000 (0:00:00.138) 0:12:49.047 *******
2026-03-18 04:56:30.366563 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366570 | orchestrator |
2026-03-18 04:56:30.366577 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-18 04:56:30.366585 | orchestrator | Wednesday 18 March 2026 04:56:17 +0000 (0:00:00.253) 0:12:49.301 *******
2026-03-18 04:56:30.366593 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366601 | orchestrator |
2026-03-18 04:56:30.366609 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-18 04:56:30.366615 | orchestrator | Wednesday 18 March 2026 04:56:17 +0000 (0:00:00.151) 0:12:49.452 *******
2026-03-18 04:56:30.366621 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366628 | orchestrator |
2026-03-18 04:56:30.366635 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-18 04:56:30.366649 | orchestrator | Wednesday 18 March 2026 04:56:18 +0000 (0:00:00.217) 0:12:49.669 *******
2026-03-18 04:56:30.366656 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366664 | orchestrator |
2026-03-18 04:56:30.366671 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-18 04:56:30.366678 | orchestrator | Wednesday 18 March 2026 04:56:18 +0000 (0:00:00.139) 0:12:49.809 *******
2026-03-18 04:56:30.366703 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366710 | orchestrator |
2026-03-18 04:56:30.366718 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-18 04:56:30.366728 | orchestrator | Wednesday 18 March 2026 04:56:18 +0000 (0:00:00.132) 0:12:49.942 *******
2026-03-18 04:56:30.366736 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366743 | orchestrator |
2026-03-18 04:56:30.366799 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-18 04:56:30.366806 | orchestrator | Wednesday 18 March 2026 04:56:18 +0000 (0:00:00.140) 0:12:50.083 *******
2026-03-18 04:56:30.366813 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366820 | orchestrator |
2026-03-18 04:56:30.366826 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-18 04:56:30.366833 | orchestrator | Wednesday 18 March 2026 04:56:18 +0000 (0:00:00.472) 0:12:50.556 *******
2026-03-18 04:56:30.366840 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366846 | orchestrator |
2026-03-18 04:56:30.366854 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-18 04:56:30.366861 | orchestrator | Wednesday 18 March 2026 04:56:19 +0000 (0:00:00.149) 0:12:50.705 *******
2026-03-18 04:56:30.366868 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366875 | orchestrator |
2026-03-18 04:56:30.366882 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-18 04:56:30.366888 | orchestrator | Wednesday 18 March 2026 04:56:19 +0000 (0:00:00.133) 0:12:50.838 *******
2026-03-18 04:56:30.366894 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-18 04:56:30.366901 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-18 04:56:30.366907 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-18 04:56:30.366913 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366919 | orchestrator |
2026-03-18 04:56:30.366925 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-18 04:56:30.366931 | orchestrator | Wednesday 18 March 2026 04:56:19 +0000 (0:00:00.431) 0:12:51.270 *******
2026-03-18 04:56:30.366937 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-18 04:56:30.366943 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-18 04:56:30.366951 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-18 04:56:30.366957 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.366964 | orchestrator |
2026-03-18 04:56:30.366971 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-18 04:56:30.366984 | orchestrator | Wednesday 18 March 2026 04:56:20 +0000 (0:00:00.428) 0:12:51.698 *******
2026-03-18 04:56:30.366991 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-18 04:56:30.366998 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-18 04:56:30.367004 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-18 04:56:30.367010 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.367016 | orchestrator |
2026-03-18 04:56:30.367022 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-18 04:56:30.367029 | orchestrator | Wednesday 18 March 2026 04:56:20 +0000 (0:00:00.513) 0:12:52.212 *******
2026-03-18 04:56:30.367035 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.367042 | orchestrator |
2026-03-18 04:56:30.367048 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-18 04:56:30.367054 | orchestrator | Wednesday 18 March 2026 04:56:20 +0000 (0:00:00.145) 0:12:52.357 *******
2026-03-18 04:56:30.367068 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-18 04:56:30.367075 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.367081 | orchestrator |
2026-03-18 04:56:30.367088 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-18 04:56:30.367095 | orchestrator | Wednesday 18 March 2026 04:56:21 +0000 (0:00:00.331) 0:12:52.689 *******
2026-03-18 04:56:30.367101 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:56:30.367107 | orchestrator |
2026-03-18 04:56:30.367114 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-18 04:56:30.367121 | orchestrator | Wednesday 18 March 2026 04:56:21 +0000 (0:00:00.831) 0:12:53.521 *******
2026-03-18 04:56:30.367127 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 04:56:30.367135 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 04:56:30.367141 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-18 04:56:30.367148 | orchestrator |
2026-03-18 04:56:30.367153 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-18 04:56:30.367159 | orchestrator | Wednesday 18 March 2026 04:56:22 +0000 (0:00:00.984) 0:12:54.506 *******
2026-03-18 04:56:30.367166 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2
2026-03-18 04:56:30.367171 | orchestrator |
2026-03-18 04:56:30.367178 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-18 04:56:30.367185 | orchestrator | Wednesday 18 March 2026 04:56:23 +0000 (0:00:00.543) 0:12:55.049 *******
2026-03-18 04:56:30.367191 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:56:30.367198 | orchestrator |
2026-03-18 04:56:30.367204 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-18 04:56:30.367211 | orchestrator | Wednesday 18 March 2026 04:56:23 +0000 (0:00:00.507) 0:12:55.556 *******
2026-03-18 04:56:30.367217 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:30.367224 | orchestrator |
2026-03-18 04:56:30.367230 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-18 04:56:30.367237 | orchestrator | Wednesday 18 March 2026 04:56:24 +0000 (0:00:00.152) 0:12:55.709 *******
2026-03-18 04:56:30.367243 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 04:56:30.367249 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 04:56:30.367264 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 04:56:53.921094 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}]
2026-03-18 04:56:53.921237 | orchestrator |
2026-03-18 04:56:53.921269 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-18 04:56:53.921283 | orchestrator | Wednesday 18 March 2026 04:56:30 +0000 (0:00:06.255) 0:13:01.965 *******
2026-03-18 04:56:53.921295 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:56:53.921307 | orchestrator |
2026-03-18 04:56:53.921318 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-18 04:56:53.921329 | orchestrator | Wednesday 18 March 2026 04:56:30 +0000 (0:00:00.177) 0:13:02.143 *******
2026-03-18 04:56:53.921346 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-18 04:56:53.921366 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-18 04:56:53.921384 | orchestrator |
2026-03-18 04:56:53.921402 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-18 04:56:53.921419 | orchestrator | Wednesday 18 March 2026 04:56:32 +0000 (0:00:02.178) 0:13:04.321 *******
2026-03-18 04:56:53.921438 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-18 04:56:53.921457 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-18 04:56:53.921477 | orchestrator |
2026-03-18 04:56:53.921495 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-18 04:56:53.921515 | orchestrator | Wednesday 18 March 2026 04:56:33 +0000 (0:00:01.036) 0:13:05.357 *******
2026-03-18 04:56:53.921567 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:56:53.921588 | orchestrator |
2026-03-18 04:56:53.921607 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-18 04:56:53.921625 | orchestrator | Wednesday 18 March 2026 04:56:34 +0000 (0:00:00.545) 0:13:05.903 *******
2026-03-18 04:56:53.921644 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:53.921664 | orchestrator |
2026-03-18 04:56:53.921683 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-18 04:56:53.921701 | orchestrator | Wednesday 18 March 2026 04:56:34 +0000 (0:00:00.130) 0:13:06.034 *******
2026-03-18 04:56:53.921721 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:53.921739 | orchestrator |
2026-03-18 04:56:53.921757 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-18 04:56:53.921806 | orchestrator | Wednesday 18 March 2026 04:56:34 +0000 (0:00:00.142) 0:13:06.176 *******
2026-03-18 04:56:53.921824 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2
2026-03-18 04:56:53.921844 | orchestrator |
2026-03-18 04:56:53.921882 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-18 04:56:53.921902 | orchestrator | Wednesday 18 March 2026 04:56:34 +0000 (0:00:00.206) 0:13:06.382 *******
2026-03-18 04:56:53.921920 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:53.921940 | orchestrator |
2026-03-18 04:56:53.921959 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-18 04:56:53.921977 | orchestrator | Wednesday 18 March 2026 04:56:34 +0000 (0:00:00.171) 0:13:06.554 *******
2026-03-18 04:56:53.921996 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:53.922015 | orchestrator |
2026-03-18 04:56:53.922109 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-18 04:56:53.922127 | orchestrator | Wednesday 18 March 2026 04:56:35 +0000 (0:00:00.160) 0:13:06.714 *******
2026-03-18 04:56:53.922145 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2
2026-03-18 04:56:53.922164 | orchestrator |
2026-03-18 04:56:53.922183 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-18 04:56:53.922201 | orchestrator | Wednesday 18 March 2026 04:56:35 +0000 (0:00:00.530) 0:13:07.245 *******
2026-03-18 04:56:53.922220 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:56:53.922239 | orchestrator |
2026-03-18 04:56:53.922256 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-18 04:56:53.922274 | orchestrator | Wednesday 18 March 2026 04:56:36 +0000 (0:00:01.041) 0:13:08.287 *******
2026-03-18 04:56:53.922292 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:56:53.922310 | orchestrator |
2026-03-18 04:56:53.922327 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-18 04:56:53.922344 | orchestrator | Wednesday 18 March 2026 04:56:37 +0000 (0:00:00.985) 0:13:09.273 *******
2026-03-18 04:56:53.922362 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:56:53.922380 | orchestrator |
2026-03-18 04:56:53.922399 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-18 04:56:53.922417 | orchestrator | Wednesday 18 March 2026 04:56:39 +0000 (0:00:01.418) 0:13:10.691 *******
2026-03-18 04:56:53.922435 | orchestrator | changed: [testbed-node-2]
2026-03-18 04:56:53.922453 | orchestrator |
2026-03-18 04:56:53.922471 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-18 04:56:53.922489 | orchestrator | Wednesday 18 March 2026 04:56:41 +0000 (0:00:02.836) 0:13:13.528 *******
2026-03-18 04:56:53.922508 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-18 04:56:53.922522 | orchestrator |
2026-03-18 04:56:53.922533 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-18 04:56:53.922543 | orchestrator | Wednesday 18 March 2026 04:56:42 +0000 (0:00:00.601) 0:13:14.129 *******
2026-03-18 04:56:53.922554 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-18 04:56:53.922565 | orchestrator |
2026-03-18 04:56:53.922588 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-18 04:56:53.922599 | orchestrator | Wednesday 18 March 2026 04:56:43 +0000 (0:00:01.426) 0:13:15.556 *******
2026-03-18 04:56:53.922610 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-18 04:56:53.922621 | orchestrator |
2026-03-18 04:56:53.922632 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-18 04:56:53.922642 | orchestrator | Wednesday 18 March 2026 04:56:45 +0000 (0:00:01.337) 0:13:16.893 *******
2026-03-18 04:56:53.922653 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:56:53.922664 | orchestrator |
2026-03-18 04:56:53.922674 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-18 04:56:53.922705 | orchestrator | Wednesday 18 March 2026 04:56:45 +0000 (0:00:00.298) 0:13:17.191 *******
2026-03-18 04:56:53.922716 | orchestrator | ok: [testbed-node-2]
2026-03-18 04:56:53.922727 | orchestrator |
2026-03-18 04:56:53.922738 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-18 04:56:53.922748 | orchestrator | Wednesday 18 March 2026 04:56:45 +0000 (0:00:00.158) 0:13:17.350 *******
2026-03-18 04:56:53.922759 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-03-18 04:56:53.922827 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-03-18 04:56:53.922844 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:53.922855 | orchestrator |
2026-03-18 04:56:53.922866 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-18 04:56:53.922876 | orchestrator | Wednesday 18 March 2026 04:56:46 +0000 (0:00:00.957) 0:13:18.307 *******
2026-03-18 04:56:53.922887 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-18 04:56:53.922898 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-03-18 04:56:53.922908 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-03-18 04:56:53.922919 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-18 04:56:53.922930 | orchestrator | skipping: [testbed-node-2]
2026-03-18 04:56:53.922940 | orchestrator |
2026-03-18 04:56:53.922951 | orchestrator | PLAY [Set osd flags] ***********************************************************
2026-03-18 04:56:53.922962 | orchestrator |
2026-03-18 04:56:53.922972 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-18 04:56:53.922983 | orchestrator | Wednesday 18 March 2026 04:56:47 +0000 (0:00:01.279) 0:13:19.587 *******
2026-03-18 04:56:53.922994 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:56:53.923004 | orchestrator | ok: [testbed-node-4]
2026-03-18 04:56:53.923015 | orchestrator | ok: [testbed-node-5]
2026-03-18 04:56:53.923026 | orchestrator |
2026-03-18 04:56:53.923037 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-18 04:56:53.923047 | orchestrator | Wednesday 18 March 2026 04:56:48 +0000 (0:00:00.655) 0:13:20.243 *******
2026-03-18 04:56:53.923058 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:56:53.923069 | orchestrator | ok: [testbed-node-4]
2026-03-18 04:56:53.923079 | orchestrator | ok: [testbed-node-5]
2026-03-18 04:56:53.923090 | orchestrator |
2026-03-18 04:56:53.923101 | orchestrator | TASK [Get pool list] ***********************************************************
2026-03-18 04:56:53.923111 | orchestrator | Wednesday 18 March 2026 04:56:49 +0000 (0:00:00.840) 0:13:21.084 *******
2026-03-18 04:56:53.923122 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-18 04:56:53.923133 | orchestrator |
2026-03-18 04:56:53.923151 | orchestrator | TASK [Get balancer module status] **********************************************
2026-03-18 04:56:53.923161 | orchestrator | Wednesday 18 March 2026 04:56:51 +0000 (0:00:02.102) 0:13:23.186 *******
2026-03-18 04:56:53.923172 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-18 04:56:53.923183 | orchestrator |
2026-03-18 04:56:53.923194 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] ****************************************
2026-03-18 04:56:53.923204 | orchestrator | Wednesday 18 March 2026 04:56:53 +0000 (0:00:01.860) 0:13:25.046 *******
2026-03-18 04:56:53.923222 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-03-18T02:39:51.344214+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '21', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}})
2026-03-18 04:56:53.923263 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-03-18T02:41:04.860544+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '33', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '31', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1,
'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-18 04:56:54.394118 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-03-18T02:41:08.355405+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '69', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '31', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 
'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-18 04:56:54.394377 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-03-18T02:42:08.289221+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 
'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.1299999952316284, 'score_stable': 1.1299999952316284, 'optimal_score': 1, 'raw_score_acting': 1.1299999952316284, 'raw_score_stable': 1.1299999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-18 04:56:54.394468 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-03-18T02:42:14.635268+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 
'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-18 04:56:54.394514 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-03-18T02:42:20.850023+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 
'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '75', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-18 04:56:55.186458 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-03-18T02:42:27.185209+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 
'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '196', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '75', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-18 04:56:55.186585 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-03-18T02:42:33.064967+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 
'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '77', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-18 04:56:55.186650 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-03-18T02:42:44.590184+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 
'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '79', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '77', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-18 04:56:55.186672 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-03-18T02:43:33.692843+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 
'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '102', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 102, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-18 04:56:55.186710 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-03-18T02:43:42.696322+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 
'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '110', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 110, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-18 04:58:15.323398 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-03-18T02:43:51.287738+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 
'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '208', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 208, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-18 04:58:15.323555 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-03-18T02:44:00.131083+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 
'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '125', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 125, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-18 04:58:15.323579 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': 
'2026-03-18T02:44:08.349044+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '131', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 131, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-18 04:58:15.323598 | orchestrator | 2026-03-18 04:58:15.323610 | orchestrator | TASK [Disable balancer] 
********************************************************
2026-03-18 04:58:15.323622 | orchestrator | Wednesday 18 March 2026 04:56:55 +0000 (0:00:01.877) 0:13:26.924 *******
2026-03-18 04:58:15.323632 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-18 04:58:15.323642 | orchestrator |
2026-03-18 04:58:15.323652 | orchestrator | TASK [Disable pg autoscale on pools] *******************************************
2026-03-18 04:58:15.323661 | orchestrator | Wednesday 18 March 2026 04:56:57 +0000 (0:00:01.774) 0:13:28.698 *******
2026-03-18 04:58:15.323671 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-03-18 04:58:15.323682 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-03-18 04:58:15.323692 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-03-18 04:58:15.323702 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-03-18 04:58:15.323713 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-03-18 04:58:15.323722 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-03-18 04:58:15.323732 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-03-18 04:58:15.323741 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-03-18 04:58:15.323751 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-03-18 04:58:15.323760 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-03-18 04:58:15.323770 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-03-18 04:58:15.323779 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-03-18 04:58:15.323788 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})
2026-03-18 04:58:15.323798 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})
2026-03-18 04:58:15.323807 | orchestrator |
2026-03-18 04:58:15.323817 | orchestrator | TASK [Set osd flags] ***********************************************************
2026-03-18 04:58:15.323827 | orchestrator | Wednesday 18 March 2026 04:58:10 +0000 (0:01:13.004) 0:14:41.702 *******
2026-03-18 04:58:15.323935 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-03-18 04:58:22.663684 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-03-18 04:58:22.663815 | orchestrator |
2026-03-18 04:58:22.663967 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-03-18 04:58:22.663994 | orchestrator |
2026-03-18 04:58:22.664010 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-18 04:58:22.664026 | orchestrator | Wednesday 18 March 2026 04:58:15 +0000 (0:00:05.224) 0:14:46.927 *******
2026-03-18 04:58:22.664042 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-03-18 04:58:22.664057 | orchestrator |
2026-03-18 04:58:22.664074 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-18 04:58:22.664090 | orchestrator | Wednesday 18 March 2026 04:58:15 +0000 (0:00:00.281) 0:14:47.208 *******
2026-03-18 04:58:22.664107 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:58:22.664124 | orchestrator |
2026-03-18 04:58:22.664141 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-18 04:58:22.664191 | orchestrator | Wednesday 18 March 2026 04:58:16 +0000 (0:00:00.472) 0:14:47.681 *******
2026-03-18 04:58:22.664210 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:58:22.664228 | orchestrator |
2026-03-18 04:58:22.664245 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-18 04:58:22.664262 | orchestrator | Wednesday 18 March 2026 04:58:16 +0000 (0:00:00.730) 0:14:47.822 *******
2026-03-18 04:58:22.664278 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:58:22.664295 | orchestrator |
2026-03-18 04:58:22.664312 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-18 04:58:22.664329 | orchestrator | Wednesday 18 March 2026 04:58:16 +0000 (0:00:00.147) 0:14:48.553 *******
2026-03-18 04:58:22.664345 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:58:22.664361 | orchestrator |
2026-03-18 04:58:22.664378 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-18 04:58:22.664394 | orchestrator | Wednesday 18 March 2026 04:58:17 +0000 (0:00:00.157) 0:14:48.700 *******
2026-03-18 04:58:22.664410 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:58:22.664426 | orchestrator |
2026-03-18 04:58:22.664444 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-18 04:58:22.664461 | orchestrator | Wednesday 18 March 2026 04:58:17 +0000 (0:00:00.157) 0:14:48.858 *******
2026-03-18 04:58:22.664476 | orchestrator | ok: [testbed-node-3]
2026-03-18 04:58:22.664493 | orchestrator |
2026-03-18 04:58:22.664510 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-18 04:58:22.664528 | orchestrator | Wednesday 18 March 2026 04:58:17 +0000
(0:00:00.190) 0:14:49.048 ******* 2026-03-18 04:58:22.664544 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:22.664561 | orchestrator | 2026-03-18 04:58:22.664596 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-18 04:58:22.664615 | orchestrator | Wednesday 18 March 2026 04:58:17 +0000 (0:00:00.156) 0:14:49.205 ******* 2026-03-18 04:58:22.664632 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:22.664649 | orchestrator | 2026-03-18 04:58:22.664665 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-18 04:58:22.664681 | orchestrator | Wednesday 18 March 2026 04:58:17 +0000 (0:00:00.141) 0:14:49.346 ******* 2026-03-18 04:58:22.664698 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:58:22.664716 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:58:22.664733 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:58:22.664749 | orchestrator | 2026-03-18 04:58:22.664766 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-18 04:58:22.664783 | orchestrator | Wednesday 18 March 2026 04:58:18 +0000 (0:00:00.686) 0:14:50.033 ******* 2026-03-18 04:58:22.664799 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:22.664815 | orchestrator | 2026-03-18 04:58:22.664831 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-18 04:58:22.664879 | orchestrator | Wednesday 18 March 2026 04:58:18 +0000 (0:00:00.269) 0:14:50.303 ******* 2026-03-18 04:58:22.664896 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:58:22.664911 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 
2026-03-18 04:58:22.664927 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:58:22.664942 | orchestrator | 2026-03-18 04:58:22.664958 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-18 04:58:22.664973 | orchestrator | Wednesday 18 March 2026 04:58:20 +0000 (0:00:02.194) 0:14:52.497 ******* 2026-03-18 04:58:22.664987 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-18 04:58:22.665001 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-18 04:58:22.665013 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-18 04:58:22.665043 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:22.665057 | orchestrator | 2026-03-18 04:58:22.665071 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-18 04:58:22.665085 | orchestrator | Wednesday 18 March 2026 04:58:21 +0000 (0:00:00.420) 0:14:52.918 ******* 2026-03-18 04:58:22.665101 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-18 04:58:22.665119 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-18 04:58:22.665158 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-18 04:58:22.665175 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:22.665189 | 
orchestrator | 2026-03-18 04:58:22.665202 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-18 04:58:22.665216 | orchestrator | Wednesday 18 March 2026 04:58:22 +0000 (0:00:00.959) 0:14:53.877 ******* 2026-03-18 04:58:22.665232 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:22.665249 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:22.665263 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:22.665277 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:22.665291 | orchestrator | 2026-03-18 04:58:22.665313 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-18 04:58:22.665327 | orchestrator | Wednesday 18 March 2026 04:58:22 +0000 (0:00:00.174) 0:14:54.052 ******* 2026-03-18 
04:58:22.665342 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f231ed715636', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 04:58:19.195985', 'end': '2026-03-18 04:58:19.248761', 'delta': '0:00:00.052776', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f231ed715636'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-18 04:58:22.665354 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c6b616adb9bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 04:58:19.756006', 'end': '2026-03-18 04:58:19.808840', 'delta': '0:00:00.052834', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c6b616adb9bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-18 04:58:22.665382 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '38d5679b5612', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 04:58:20.678289', 'end': '2026-03-18 04:58:20.728806', 'delta': '0:00:00.050517', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['38d5679b5612'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-18 04:58:27.330202 | orchestrator | 2026-03-18 04:58:27.330282 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-18 04:58:27.330292 | orchestrator | Wednesday 18 March 2026 04:58:22 +0000 (0:00:00.220) 0:14:54.272 ******* 2026-03-18 04:58:27.330300 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:27.330307 | orchestrator | 2026-03-18 04:58:27.330314 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-18 04:58:27.330320 | orchestrator | Wednesday 18 March 2026 04:58:23 +0000 (0:00:00.964) 0:14:55.236 ******* 2026-03-18 04:58:27.330327 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:27.330334 | orchestrator | 2026-03-18 04:58:27.330340 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-18 04:58:27.330347 | orchestrator | Wednesday 18 March 2026 04:58:23 +0000 (0:00:00.257) 0:14:55.494 ******* 2026-03-18 04:58:27.330353 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:27.330359 | orchestrator | 2026-03-18 04:58:27.330365 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-18 04:58:27.330371 | orchestrator | Wednesday 18 March 2026 04:58:24 +0000 (0:00:00.152) 0:14:55.647 ******* 2026-03-18 04:58:27.330377 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-18 04:58:27.330384 | orchestrator | 2026-03-18 04:58:27.330390 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
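The "Find a running mon container" / "Set_fact running_mon - container" pair above runs `docker ps -q --filter name=ceph-mon-<host>` per monitor and keeps the first host that returned a container id. A minimal sketch of that reduction, assuming the per-host stdout values copied from this log (the function itself is illustrative, not ceph-ansible source):

```python
# Reduce the per-host `docker ps -q` results to a single running_mon fact:
# the first host whose stdout is a non-empty container id wins.
def pick_running_mon(results):
    """results: list of (hostname, docker_ps_stdout) pairs."""
    for host, stdout in results:
        if stdout.strip():  # a container id means the mon is running
            return host
    return None

hosts = [("testbed-node-0", "f231ed715636"),
         ("testbed-node-1", "c6b616adb9bf"),
         ("testbed-node-2", "38d5679b5612")]
print(pick_running_mon(hosts))  # → testbed-node-0
```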
********************************************** 2026-03-18 04:58:27.330396 | orchestrator | Wednesday 18 March 2026 04:58:25 +0000 (0:00:01.061) 0:14:56.708 ******* 2026-03-18 04:58:27.330402 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:27.330408 | orchestrator | 2026-03-18 04:58:27.330414 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-18 04:58:27.330420 | orchestrator | Wednesday 18 March 2026 04:58:25 +0000 (0:00:00.156) 0:14:56.864 ******* 2026-03-18 04:58:27.330426 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:27.330433 | orchestrator | 2026-03-18 04:58:27.330439 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-18 04:58:27.330445 | orchestrator | Wednesday 18 March 2026 04:58:25 +0000 (0:00:00.139) 0:14:57.003 ******* 2026-03-18 04:58:27.330451 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:27.330457 | orchestrator | 2026-03-18 04:58:27.330463 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 04:58:27.330469 | orchestrator | Wednesday 18 March 2026 04:58:25 +0000 (0:00:00.246) 0:14:57.250 ******* 2026-03-18 04:58:27.330475 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:27.330481 | orchestrator | 2026-03-18 04:58:27.330488 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-18 04:58:27.330506 | orchestrator | Wednesday 18 March 2026 04:58:25 +0000 (0:00:00.132) 0:14:57.382 ******* 2026-03-18 04:58:27.330531 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:27.330538 | orchestrator | 2026-03-18 04:58:27.330544 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-18 04:58:27.330550 | orchestrator | Wednesday 18 March 2026 04:58:25 +0000 (0:00:00.140) 0:14:57.523 ******* 2026-03-18 04:58:27.330556 | orchestrator | ok: 
[testbed-node-3] 2026-03-18 04:58:27.330562 | orchestrator | 2026-03-18 04:58:27.330568 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-18 04:58:27.330574 | orchestrator | Wednesday 18 March 2026 04:58:26 +0000 (0:00:00.181) 0:14:57.705 ******* 2026-03-18 04:58:27.330580 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:27.330587 | orchestrator | 2026-03-18 04:58:27.330593 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-18 04:58:27.330599 | orchestrator | Wednesday 18 March 2026 04:58:26 +0000 (0:00:00.140) 0:14:57.845 ******* 2026-03-18 04:58:27.330605 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:27.330611 | orchestrator | 2026-03-18 04:58:27.330617 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-18 04:58:27.330623 | orchestrator | Wednesday 18 March 2026 04:58:26 +0000 (0:00:00.174) 0:14:58.020 ******* 2026-03-18 04:58:27.330629 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:27.330636 | orchestrator | 2026-03-18 04:58:27.330642 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-18 04:58:27.330649 | orchestrator | Wednesday 18 March 2026 04:58:26 +0000 (0:00:00.127) 0:14:58.147 ******* 2026-03-18 04:58:27.330655 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:27.330661 | orchestrator | 2026-03-18 04:58:27.330667 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-18 04:58:27.330673 | orchestrator | Wednesday 18 March 2026 04:58:26 +0000 (0:00:00.176) 0:14:58.323 ******* 2026-03-18 04:58:27.330681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': 
'0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:58:27.330702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a', 'dm-uuid-LVM-OBDgCO1TfJO26KZndmcG4XUfdlxxEe11eqb03b1R3TiAd5BAik4vvOnTIot4pXZ1'], 'uuids': ['55d52066-97cb-48c1-a9a5-651ff762c061'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ebabc839', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1']}})  2026-03-18 04:58:27.330711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa', 'scsi-SQEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '26f175df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-18 04:58:27.330719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hyLInL-qBmT-hkMu-ewvD-iGD6-c0uQ-hDScLy', 'scsi-0QEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768', 'scsi-SQEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3c07f10e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb']}})  2026-03-18 04:58:27.330735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:58:27.330743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:58:27.330750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-39-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 04:58:27.330757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:58:27.330764 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3', 'dm-uuid-CRYPT-LUKS2-7a3d4fd16bbc483aab118d6b9a67b0a4-TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-18 04:58:27.330777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:58:27.989738 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb', 'dm-uuid-LVM-W5EL6s0cOZukCJgJFLnUeUfZF3v581ieTXRD31C4XH2D2TZlGP7o3YPUberRNbx3'], 'uuids': ['7a3d4fd1-6bbc-483a-ab11-8d6b9a67b0a4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3c07f10e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3']}})  2026-03-18 04:58:27.989941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-w7AXxM-UwrZ-P6aH-00LI-mMT0-kFYy-HZNbAJ', 'scsi-0QEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e', 'scsi-SQEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ebabc839', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a']}})  2026-03-18 04:58:27.989963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:58:27.989981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1c5784ed', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-18 04:58:27.990015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:58:27.990094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 04:58:27.990107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1', 'dm-uuid-CRYPT-LUKS2-55d5206697cb48c1a9a5651ff762c061-eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-18 04:58:27.990120 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:27.990134 | orchestrator | 2026-03-18 04:58:27.990151 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-18 04:58:27.990164 | orchestrator | Wednesday 18 March 2026 04:58:27 +0000 (0:00:01.060) 0:14:59.384 ******* 2026-03-18 04:58:27.990176 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:27.990189 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a', 'dm-uuid-LVM-OBDgCO1TfJO26KZndmcG4XUfdlxxEe11eqb03b1R3TiAd5BAik4vvOnTIot4pXZ1'], 'uuids': ['55d52066-97cb-48c1-a9a5-651ff762c061'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ebabc839', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1']}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:27.990201 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa', 'scsi-SQEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '26f175df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:27.990222 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hyLInL-qBmT-hkMu-ewvD-iGD6-c0uQ-hDScLy', 'scsi-0QEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768', 'scsi-SQEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3c07f10e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb']}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:28.181081 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:28.181191 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:28.181206 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:28.181218 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:28.181228 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3', 'dm-uuid-CRYPT-LUKS2-7a3d4fd16bbc483aab118d6b9a67b0a4-TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:28.181257 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:28.181292 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb', 'dm-uuid-LVM-W5EL6s0cOZukCJgJFLnUeUfZF3v581ieTXRD31C4XH2D2TZlGP7o3YPUberRNbx3'], 'uuids': ['7a3d4fd1-6bbc-483a-ab11-8d6b9a67b0a4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3c07f10e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3']}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:28.181305 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-w7AXxM-UwrZ-P6aH-00LI-mMT0-kFYy-HZNbAJ', 'scsi-0QEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e', 'scsi-SQEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ebabc839', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a']}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:28.181317 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:28.181336 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1c5784ed', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:37.126098 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:37.126176 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:37.126184 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1', 'dm-uuid-CRYPT-LUKS2-55d5206697cb48c1a9a5651ff762c061-eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 04:58:37.126189 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:37.126194 | orchestrator | 2026-03-18 04:58:37.126199 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-18 04:58:37.126203 | orchestrator | Wednesday 18 March 2026 04:58:28 +0000 (0:00:00.404) 0:14:59.788 ******* 2026-03-18 04:58:37.126207 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:37.126224 | orchestrator | 2026-03-18 04:58:37.126228 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-18 04:58:37.126232 | orchestrator | Wednesday 18 March 2026 04:58:28 +0000 (0:00:00.504) 0:15:00.293 ******* 2026-03-18 04:58:37.126236 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:37.126239 | orchestrator | 2026-03-18 04:58:37.126243 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 04:58:37.126247 | orchestrator | Wednesday 18 March 2026 04:58:28 +0000 (0:00:00.140) 0:15:00.433 ******* 2026-03-18 04:58:37.126251 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:37.126254 | orchestrator | 2026-03-18 04:58:37.126258 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 04:58:37.126262 | orchestrator | Wednesday 18 March 2026 04:58:29 +0000 (0:00:00.495) 0:15:00.928 ******* 2026-03-18 04:58:37.126265 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:37.126269 | orchestrator | 2026-03-18 04:58:37.126273 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 04:58:37.126277 | orchestrator | Wednesday 18 March 2026 04:58:29 +0000 (0:00:00.124) 0:15:01.053 ******* 2026-03-18 04:58:37.126280 | orchestrator | skipping: [testbed-node-3] 2026-03-18 
04:58:37.126284 | orchestrator | 2026-03-18 04:58:37.126288 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 04:58:37.126292 | orchestrator | Wednesday 18 March 2026 04:58:29 +0000 (0:00:00.260) 0:15:01.314 ******* 2026-03-18 04:58:37.126295 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:37.126299 | orchestrator | 2026-03-18 04:58:37.126303 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-18 04:58:37.126306 | orchestrator | Wednesday 18 March 2026 04:58:29 +0000 (0:00:00.147) 0:15:01.461 ******* 2026-03-18 04:58:37.126310 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-18 04:58:37.126314 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-18 04:58:37.126318 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-18 04:58:37.126322 | orchestrator | 2026-03-18 04:58:37.126326 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-18 04:58:37.126329 | orchestrator | Wednesday 18 March 2026 04:58:30 +0000 (0:00:01.067) 0:15:02.529 ******* 2026-03-18 04:58:37.126333 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-18 04:58:37.126337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-18 04:58:37.126341 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-18 04:58:37.126345 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:37.126348 | orchestrator | 2026-03-18 04:58:37.126353 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-18 04:58:37.126356 | orchestrator | Wednesday 18 March 2026 04:58:31 +0000 (0:00:00.178) 0:15:02.708 ******* 2026-03-18 04:58:37.126372 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-03-18 04:58:37.126376 | 
orchestrator | 2026-03-18 04:58:37.126381 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 04:58:37.126386 | orchestrator | Wednesday 18 March 2026 04:58:31 +0000 (0:00:00.242) 0:15:02.950 ******* 2026-03-18 04:58:37.126389 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:37.126393 | orchestrator | 2026-03-18 04:58:37.126397 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-18 04:58:37.126400 | orchestrator | Wednesday 18 March 2026 04:58:31 +0000 (0:00:00.485) 0:15:03.436 ******* 2026-03-18 04:58:37.126404 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:37.126408 | orchestrator | 2026-03-18 04:58:37.126412 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 04:58:37.126415 | orchestrator | Wednesday 18 March 2026 04:58:31 +0000 (0:00:00.166) 0:15:03.602 ******* 2026-03-18 04:58:37.126419 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:37.126426 | orchestrator | 2026-03-18 04:58:37.126430 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 04:58:37.126434 | orchestrator | Wednesday 18 March 2026 04:58:32 +0000 (0:00:00.166) 0:15:03.769 ******* 2026-03-18 04:58:37.126438 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:37.126441 | orchestrator | 2026-03-18 04:58:37.126445 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 04:58:37.126449 | orchestrator | Wednesday 18 March 2026 04:58:32 +0000 (0:00:00.271) 0:15:04.040 ******* 2026-03-18 04:58:37.126453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 04:58:37.126456 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 04:58:37.126460 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-18 04:58:37.126464 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:37.126468 | orchestrator | 2026-03-18 04:58:37.126472 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 04:58:37.126475 | orchestrator | Wednesday 18 March 2026 04:58:32 +0000 (0:00:00.472) 0:15:04.513 ******* 2026-03-18 04:58:37.126479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 04:58:37.126484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 04:58:37.126490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 04:58:37.126497 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:37.126502 | orchestrator | 2026-03-18 04:58:37.126508 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 04:58:37.126514 | orchestrator | Wednesday 18 March 2026 04:58:33 +0000 (0:00:00.437) 0:15:04.950 ******* 2026-03-18 04:58:37.126520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 04:58:37.126525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 04:58:37.126530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 04:58:37.126536 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:37.126542 | orchestrator | 2026-03-18 04:58:37.126547 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 04:58:37.126553 | orchestrator | Wednesday 18 March 2026 04:58:33 +0000 (0:00:00.404) 0:15:05.355 ******* 2026-03-18 04:58:37.126559 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:37.126564 | orchestrator | 2026-03-18 04:58:37.126570 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 04:58:37.126576 | orchestrator | Wednesday 18 March 2026 04:58:33 +0000 
(0:00:00.156) 0:15:05.511 ******* 2026-03-18 04:58:37.126582 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-18 04:58:37.126588 | orchestrator | 2026-03-18 04:58:37.126593 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-18 04:58:37.126599 | orchestrator | Wednesday 18 March 2026 04:58:34 +0000 (0:00:00.348) 0:15:05.859 ******* 2026-03-18 04:58:37.126605 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:58:37.126611 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:58:37.126617 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:58:37.126623 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-18 04:58:37.126629 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 04:58:37.126636 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 04:58:37.126641 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 04:58:37.126646 | orchestrator | 2026-03-18 04:58:37.126650 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-18 04:58:37.126655 | orchestrator | Wednesday 18 March 2026 04:58:35 +0000 (0:00:01.198) 0:15:07.058 ******* 2026-03-18 04:58:37.126659 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:58:37.126668 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:58:37.126673 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:58:37.126677 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-18 04:58:37.126681 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 04:58:37.126686 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 04:58:37.126690 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 04:58:37.126694 | orchestrator | 2026-03-18 04:58:37.126703 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-03-18 04:58:51.956309 | orchestrator | Wednesday 18 March 2026 04:58:37 +0000 (0:00:01.671) 0:15:08.730 ******* 2026-03-18 04:58:51.956424 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:51.956440 | orchestrator | 2026-03-18 04:58:51.956453 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-03-18 04:58:51.956464 | orchestrator | Wednesday 18 March 2026 04:58:37 +0000 (0:00:00.481) 0:15:09.211 ******* 2026-03-18 04:58:51.956475 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:51.956486 | orchestrator | 2026-03-18 04:58:51.956497 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-03-18 04:58:51.956508 | orchestrator | Wednesday 18 March 2026 04:58:37 +0000 (0:00:00.166) 0:15:09.378 ******* 2026-03-18 04:58:51.956519 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:51.956530 | orchestrator | 2026-03-18 04:58:51.956541 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-03-18 04:58:51.956551 | orchestrator | Wednesday 18 March 2026 04:58:38 +0000 (0:00:00.877) 0:15:10.255 ******* 2026-03-18 04:58:51.956562 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-03-18 04:58:51.956575 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-03-18 04:58:51.956586 | orchestrator | 2026-03-18 04:58:51.956597 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-03-18 04:58:51.956608 | orchestrator | Wednesday 18 March 2026 04:58:41 +0000 (0:00:03.062) 0:15:13.318 ******* 2026-03-18 04:58:51.956619 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-03-18 04:58:51.956630 | orchestrator | 2026-03-18 04:58:51.956641 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-18 04:58:51.956652 | orchestrator | Wednesday 18 March 2026 04:58:41 +0000 (0:00:00.195) 0:15:13.514 ******* 2026-03-18 04:58:51.956663 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-03-18 04:58:51.956674 | orchestrator | 2026-03-18 04:58:51.956684 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-18 04:58:51.956695 | orchestrator | Wednesday 18 March 2026 04:58:42 +0000 (0:00:00.224) 0:15:13.738 ******* 2026-03-18 04:58:51.956737 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.956754 | orchestrator | 2026-03-18 04:58:51.956772 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-18 04:58:51.956790 | orchestrator | Wednesday 18 March 2026 04:58:42 +0000 (0:00:00.123) 0:15:13.862 ******* 2026-03-18 04:58:51.956809 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:51.956827 | orchestrator | 2026-03-18 04:58:51.956847 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-18 04:58:51.956880 | orchestrator | Wednesday 18 March 2026 04:58:42 +0000 (0:00:00.520) 0:15:14.382 ******* 2026-03-18 04:58:51.956892 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:51.956903 | orchestrator | 2026-03-18 04:58:51.956913 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-18 04:58:51.956924 | orchestrator | Wednesday 18 March 2026 
04:58:43 +0000 (0:00:00.562) 0:15:14.945 ******* 2026-03-18 04:58:51.956935 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:51.956946 | orchestrator | 2026-03-18 04:58:51.956983 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-18 04:58:51.956994 | orchestrator | Wednesday 18 March 2026 04:58:43 +0000 (0:00:00.547) 0:15:15.492 ******* 2026-03-18 04:58:51.957005 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.957015 | orchestrator | 2026-03-18 04:58:51.957026 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-18 04:58:51.957037 | orchestrator | Wednesday 18 March 2026 04:58:44 +0000 (0:00:00.142) 0:15:15.634 ******* 2026-03-18 04:58:51.957048 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.957058 | orchestrator | 2026-03-18 04:58:51.957069 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-18 04:58:51.957080 | orchestrator | Wednesday 18 March 2026 04:58:44 +0000 (0:00:00.132) 0:15:15.767 ******* 2026-03-18 04:58:51.957091 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.957101 | orchestrator | 2026-03-18 04:58:51.957112 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-18 04:58:51.957123 | orchestrator | Wednesday 18 March 2026 04:58:44 +0000 (0:00:00.132) 0:15:15.900 ******* 2026-03-18 04:58:51.957133 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:51.957144 | orchestrator | 2026-03-18 04:58:51.957155 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-18 04:58:51.957165 | orchestrator | Wednesday 18 March 2026 04:58:45 +0000 (0:00:00.826) 0:15:16.726 ******* 2026-03-18 04:58:51.957176 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:51.957187 | orchestrator | 2026-03-18 04:58:51.957197 | orchestrator | TASK [ceph-handler : 
Include check_socket_non_container.yml] ******************* 2026-03-18 04:58:51.957208 | orchestrator | Wednesday 18 March 2026 04:58:45 +0000 (0:00:00.536) 0:15:17.263 ******* 2026-03-18 04:58:51.957219 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.957229 | orchestrator | 2026-03-18 04:58:51.957240 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-18 04:58:51.957251 | orchestrator | Wednesday 18 March 2026 04:58:45 +0000 (0:00:00.136) 0:15:17.399 ******* 2026-03-18 04:58:51.957261 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.957272 | orchestrator | 2026-03-18 04:58:51.957282 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-18 04:58:51.957293 | orchestrator | Wednesday 18 March 2026 04:58:45 +0000 (0:00:00.144) 0:15:17.543 ******* 2026-03-18 04:58:51.957304 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:51.957314 | orchestrator | 2026-03-18 04:58:51.957325 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-18 04:58:51.957335 | orchestrator | Wednesday 18 March 2026 04:58:46 +0000 (0:00:00.141) 0:15:17.685 ******* 2026-03-18 04:58:51.957346 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:51.957357 | orchestrator | 2026-03-18 04:58:51.957367 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-18 04:58:51.957378 | orchestrator | Wednesday 18 March 2026 04:58:46 +0000 (0:00:00.159) 0:15:17.844 ******* 2026-03-18 04:58:51.957389 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:51.957399 | orchestrator | 2026-03-18 04:58:51.957435 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-18 04:58:51.957447 | orchestrator | Wednesday 18 March 2026 04:58:46 +0000 (0:00:00.165) 0:15:18.010 ******* 2026-03-18 04:58:51.957458 | orchestrator | skipping: 
[testbed-node-3] 2026-03-18 04:58:51.957468 | orchestrator | 2026-03-18 04:58:51.957479 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-18 04:58:51.957490 | orchestrator | Wednesday 18 March 2026 04:58:46 +0000 (0:00:00.147) 0:15:18.157 ******* 2026-03-18 04:58:51.957500 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.957511 | orchestrator | 2026-03-18 04:58:51.957521 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-18 04:58:51.957532 | orchestrator | Wednesday 18 March 2026 04:58:46 +0000 (0:00:00.136) 0:15:18.293 ******* 2026-03-18 04:58:51.957542 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.957553 | orchestrator | 2026-03-18 04:58:51.957572 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-18 04:58:51.957583 | orchestrator | Wednesday 18 March 2026 04:58:46 +0000 (0:00:00.138) 0:15:18.432 ******* 2026-03-18 04:58:51.957594 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:51.957604 | orchestrator | 2026-03-18 04:58:51.957615 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-18 04:58:51.957626 | orchestrator | Wednesday 18 March 2026 04:58:46 +0000 (0:00:00.160) 0:15:18.592 ******* 2026-03-18 04:58:51.957636 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:51.957647 | orchestrator | 2026-03-18 04:58:51.957658 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-18 04:58:51.957668 | orchestrator | Wednesday 18 March 2026 04:58:47 +0000 (0:00:00.237) 0:15:18.829 ******* 2026-03-18 04:58:51.957679 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.957689 | orchestrator | 2026-03-18 04:58:51.957700 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-18 04:58:51.957711 | 
orchestrator | Wednesday 18 March 2026 04:58:47 +0000 (0:00:00.421) 0:15:19.251 ******* 2026-03-18 04:58:51.957721 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.957732 | orchestrator | 2026-03-18 04:58:51.957742 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-18 04:58:51.957753 | orchestrator | Wednesday 18 March 2026 04:58:47 +0000 (0:00:00.148) 0:15:19.399 ******* 2026-03-18 04:58:51.957763 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.957774 | orchestrator | 2026-03-18 04:58:51.957784 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-18 04:58:51.957794 | orchestrator | Wednesday 18 March 2026 04:58:47 +0000 (0:00:00.137) 0:15:19.537 ******* 2026-03-18 04:58:51.957805 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.957815 | orchestrator | 2026-03-18 04:58:51.957826 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-18 04:58:51.957837 | orchestrator | Wednesday 18 March 2026 04:58:48 +0000 (0:00:00.133) 0:15:19.670 ******* 2026-03-18 04:58:51.957847 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.957858 | orchestrator | 2026-03-18 04:58:51.957897 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-18 04:58:51.957907 | orchestrator | Wednesday 18 March 2026 04:58:48 +0000 (0:00:00.146) 0:15:19.816 ******* 2026-03-18 04:58:51.957918 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.957928 | orchestrator | 2026-03-18 04:58:51.957939 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-18 04:58:51.957949 | orchestrator | Wednesday 18 March 2026 04:58:48 +0000 (0:00:00.128) 0:15:19.944 ******* 2026-03-18 04:58:51.957960 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.957970 | orchestrator | 2026-03-18 
04:58:51.957981 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-18 04:58:51.957992 | orchestrator | Wednesday 18 March 2026 04:58:48 +0000 (0:00:00.152) 0:15:20.097 ******* 2026-03-18 04:58:51.958003 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.958013 | orchestrator | 2026-03-18 04:58:51.958145 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-18 04:58:51.958157 | orchestrator | Wednesday 18 March 2026 04:58:48 +0000 (0:00:00.137) 0:15:20.235 ******* 2026-03-18 04:58:51.958168 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.958178 | orchestrator | 2026-03-18 04:58:51.958189 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-18 04:58:51.958200 | orchestrator | Wednesday 18 March 2026 04:58:48 +0000 (0:00:00.137) 0:15:20.372 ******* 2026-03-18 04:58:51.958211 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.958221 | orchestrator | 2026-03-18 04:58:51.958232 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-18 04:58:51.958243 | orchestrator | Wednesday 18 March 2026 04:58:48 +0000 (0:00:00.142) 0:15:20.515 ******* 2026-03-18 04:58:51.958253 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.958273 | orchestrator | 2026-03-18 04:58:51.958284 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-18 04:58:51.958295 | orchestrator | Wednesday 18 March 2026 04:58:49 +0000 (0:00:00.152) 0:15:20.667 ******* 2026-03-18 04:58:51.958306 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:58:51.958316 | orchestrator | 2026-03-18 04:58:51.958327 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-18 04:58:51.958337 | orchestrator | Wednesday 18 March 2026 04:58:49 +0000 
(0:00:00.208) 0:15:20.876 ******* 2026-03-18 04:58:51.958348 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:51.958359 | orchestrator | 2026-03-18 04:58:51.958370 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-18 04:58:51.958380 | orchestrator | Wednesday 18 March 2026 04:58:50 +0000 (0:00:01.248) 0:15:22.124 ******* 2026-03-18 04:58:51.958391 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:58:51.958402 | orchestrator | 2026-03-18 04:58:51.958412 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-18 04:58:51.958423 | orchestrator | Wednesday 18 March 2026 04:58:51 +0000 (0:00:01.206) 0:15:23.330 ******* 2026-03-18 04:58:51.958434 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-03-18 04:58:51.958444 | orchestrator | 2026-03-18 04:58:51.958470 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-18 04:59:08.031062 | orchestrator | Wednesday 18 March 2026 04:58:51 +0000 (0:00:00.229) 0:15:23.560 ******* 2026-03-18 04:59:08.031208 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.031238 | orchestrator | 2026-03-18 04:59:08.031258 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-18 04:59:08.031278 | orchestrator | Wednesday 18 March 2026 04:58:52 +0000 (0:00:00.157) 0:15:23.717 ******* 2026-03-18 04:59:08.031297 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.031316 | orchestrator | 2026-03-18 04:59:08.031335 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-18 04:59:08.031353 | orchestrator | Wednesday 18 March 2026 04:58:52 +0000 (0:00:00.145) 0:15:23.863 ******* 2026-03-18 04:59:08.031372 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-18 
04:59:08.031392 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-18 04:59:08.031413 | orchestrator | 2026-03-18 04:59:08.031431 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-18 04:59:08.031449 | orchestrator | Wednesday 18 March 2026 04:58:53 +0000 (0:00:00.808) 0:15:24.671 ******* 2026-03-18 04:59:08.031467 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:59:08.031487 | orchestrator | 2026-03-18 04:59:08.031506 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-18 04:59:08.031527 | orchestrator | Wednesday 18 March 2026 04:58:53 +0000 (0:00:00.459) 0:15:25.131 ******* 2026-03-18 04:59:08.031545 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.031564 | orchestrator | 2026-03-18 04:59:08.031584 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-18 04:59:08.031605 | orchestrator | Wednesday 18 March 2026 04:58:53 +0000 (0:00:00.165) 0:15:25.296 ******* 2026-03-18 04:59:08.031625 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.031643 | orchestrator | 2026-03-18 04:59:08.031662 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-18 04:59:08.031681 | orchestrator | Wednesday 18 March 2026 04:58:53 +0000 (0:00:00.155) 0:15:25.452 ******* 2026-03-18 04:59:08.031700 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.031719 | orchestrator | 2026-03-18 04:59:08.031738 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-18 04:59:08.031757 | orchestrator | Wednesday 18 March 2026 04:58:53 +0000 (0:00:00.139) 0:15:25.591 ******* 2026-03-18 04:59:08.031775 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-03-18 04:59:08.031827 | orchestrator | 
2026-03-18 04:59:08.031848 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-18 04:59:08.031866 | orchestrator | Wednesday 18 March 2026 04:58:54 +0000 (0:00:00.244) 0:15:25.836 ******* 2026-03-18 04:59:08.031913 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:59:08.031932 | orchestrator | 2026-03-18 04:59:08.031952 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-18 04:59:08.031971 | orchestrator | Wednesday 18 March 2026 04:58:54 +0000 (0:00:00.713) 0:15:26.550 ******* 2026-03-18 04:59:08.031989 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 04:59:08.032007 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 04:59:08.032025 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-18 04:59:08.032044 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.032063 | orchestrator | 2026-03-18 04:59:08.032081 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-18 04:59:08.032099 | orchestrator | Wednesday 18 March 2026 04:58:55 +0000 (0:00:00.467) 0:15:27.017 ******* 2026-03-18 04:59:08.032117 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.032136 | orchestrator | 2026-03-18 04:59:08.032155 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-18 04:59:08.032174 | orchestrator | Wednesday 18 March 2026 04:58:55 +0000 (0:00:00.145) 0:15:27.163 ******* 2026-03-18 04:59:08.032192 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.032210 | orchestrator | 2026-03-18 04:59:08.032228 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-18 04:59:08.032247 | orchestrator | Wednesday 18 March 2026 04:58:55 +0000 
(0:00:00.178) 0:15:27.342 ******* 2026-03-18 04:59:08.032267 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.032286 | orchestrator | 2026-03-18 04:59:08.032304 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-18 04:59:08.032322 | orchestrator | Wednesday 18 March 2026 04:58:55 +0000 (0:00:00.164) 0:15:27.507 ******* 2026-03-18 04:59:08.032341 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.032359 | orchestrator | 2026-03-18 04:59:08.032379 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-18 04:59:08.032396 | orchestrator | Wednesday 18 March 2026 04:58:56 +0000 (0:00:00.169) 0:15:27.676 ******* 2026-03-18 04:59:08.032415 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.032433 | orchestrator | 2026-03-18 04:59:08.032451 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-18 04:59:08.032470 | orchestrator | Wednesday 18 March 2026 04:58:56 +0000 (0:00:00.153) 0:15:27.829 ******* 2026-03-18 04:59:08.032489 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:59:08.032507 | orchestrator | 2026-03-18 04:59:08.032525 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-18 04:59:08.032543 | orchestrator | Wednesday 18 March 2026 04:58:57 +0000 (0:00:01.471) 0:15:29.301 ******* 2026-03-18 04:59:08.032562 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:59:08.032582 | orchestrator | 2026-03-18 04:59:08.032600 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-18 04:59:08.032618 | orchestrator | Wednesday 18 March 2026 04:58:57 +0000 (0:00:00.145) 0:15:29.447 ******* 2026-03-18 04:59:08.032636 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-03-18 04:59:08.032654 | orchestrator | 2026-03-18 
04:59:08.032717 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-18 04:59:08.032738 | orchestrator | Wednesday 18 March 2026 04:58:58 +0000 (0:00:00.247) 0:15:29.694 ******* 2026-03-18 04:59:08.032757 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.032776 | orchestrator | 2026-03-18 04:59:08.032795 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-18 04:59:08.032813 | orchestrator | Wednesday 18 March 2026 04:58:58 +0000 (0:00:00.154) 0:15:29.849 ******* 2026-03-18 04:59:08.032844 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.032863 | orchestrator | 2026-03-18 04:59:08.032917 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-18 04:59:08.032936 | orchestrator | Wednesday 18 March 2026 04:58:58 +0000 (0:00:00.152) 0:15:30.002 ******* 2026-03-18 04:59:08.032954 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.032973 | orchestrator | 2026-03-18 04:59:08.032992 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-18 04:59:08.033011 | orchestrator | Wednesday 18 March 2026 04:58:58 +0000 (0:00:00.149) 0:15:30.151 ******* 2026-03-18 04:59:08.033029 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.033047 | orchestrator | 2026-03-18 04:59:08.033065 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-18 04:59:08.033084 | orchestrator | Wednesday 18 March 2026 04:58:59 +0000 (0:00:00.472) 0:15:30.624 ******* 2026-03-18 04:59:08.033103 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.033122 | orchestrator | 2026-03-18 04:59:08.033140 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-18 04:59:08.033158 | orchestrator | Wednesday 18 March 2026 04:58:59 +0000 (0:00:00.196) 
0:15:30.821 ******* 2026-03-18 04:59:08.033177 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.033195 | orchestrator | 2026-03-18 04:59:08.033214 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-18 04:59:08.033232 | orchestrator | Wednesday 18 March 2026 04:58:59 +0000 (0:00:00.159) 0:15:30.980 ******* 2026-03-18 04:59:08.033250 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.033268 | orchestrator | 2026-03-18 04:59:08.033287 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-18 04:59:08.033305 | orchestrator | Wednesday 18 March 2026 04:58:59 +0000 (0:00:00.167) 0:15:31.148 ******* 2026-03-18 04:59:08.033325 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:08.033343 | orchestrator | 2026-03-18 04:59:08.033361 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-18 04:59:08.033379 | orchestrator | Wednesday 18 March 2026 04:58:59 +0000 (0:00:00.152) 0:15:31.300 ******* 2026-03-18 04:59:08.033398 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:59:08.033417 | orchestrator | 2026-03-18 04:59:08.033436 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-18 04:59:08.033454 | orchestrator | Wednesday 18 March 2026 04:58:59 +0000 (0:00:00.248) 0:15:31.549 ******* 2026-03-18 04:59:08.033472 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-03-18 04:59:08.033490 | orchestrator | 2026-03-18 04:59:08.033510 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-18 04:59:08.033529 | orchestrator | Wednesday 18 March 2026 04:59:00 +0000 (0:00:00.223) 0:15:31.772 ******* 2026-03-18 04:59:08.033547 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-03-18 04:59:08.033565 | orchestrator | ok: 
[testbed-node-3] => (item=/var/lib/ceph/) 2026-03-18 04:59:08.033583 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-18 04:59:08.033602 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-18 04:59:08.033620 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-18 04:59:08.033639 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-18 04:59:08.033656 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-18 04:59:08.033674 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-18 04:59:08.033693 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 04:59:08.033711 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 04:59:08.033730 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 04:59:08.033749 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 04:59:08.033767 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 04:59:08.033794 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 04:59:08.033814 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-03-18 04:59:08.033833 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-03-18 04:59:08.033852 | orchestrator | 2026-03-18 04:59:08.033869 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-18 04:59:08.033956 | orchestrator | Wednesday 18 March 2026 04:59:05 +0000 (0:00:05.469) 0:15:37.242 ******* 2026-03-18 04:59:08.033973 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-03-18 04:59:08.033989 | orchestrator | 2026-03-18 04:59:08.034006 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] 
***************** 2026-03-18 04:59:08.034095 | orchestrator | Wednesday 18 March 2026 04:59:06 +0000 (0:00:00.596) 0:15:37.839 ******* 2026-03-18 04:59:08.034114 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-18 04:59:08.034132 | orchestrator | 2026-03-18 04:59:08.034150 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-18 04:59:08.034166 | orchestrator | Wednesday 18 March 2026 04:59:06 +0000 (0:00:00.506) 0:15:38.345 ******* 2026-03-18 04:59:08.034221 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-18 04:59:08.034240 | orchestrator | 2026-03-18 04:59:08.034268 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-18 04:59:26.812666 | orchestrator | Wednesday 18 March 2026 04:59:08 +0000 (0:00:01.288) 0:15:39.634 ******* 2026-03-18 04:59:26.812782 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.812801 | orchestrator | 2026-03-18 04:59:26.812815 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-18 04:59:26.812826 | orchestrator | Wednesday 18 March 2026 04:59:08 +0000 (0:00:00.139) 0:15:39.773 ******* 2026-03-18 04:59:26.812837 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.812848 | orchestrator | 2026-03-18 04:59:26.812860 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-18 04:59:26.812871 | orchestrator | Wednesday 18 March 2026 04:59:08 +0000 (0:00:00.143) 0:15:39.917 ******* 2026-03-18 04:59:26.812882 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.812945 | orchestrator | 2026-03-18 04:59:26.812958 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 
2026-03-18 04:59:26.813074 | orchestrator | Wednesday 18 March 2026 04:59:08 +0000 (0:00:00.151) 0:15:40.069 ******* 2026-03-18 04:59:26.813095 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.813106 | orchestrator | 2026-03-18 04:59:26.813117 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-18 04:59:26.813128 | orchestrator | Wednesday 18 March 2026 04:59:08 +0000 (0:00:00.142) 0:15:40.211 ******* 2026-03-18 04:59:26.813138 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.813149 | orchestrator | 2026-03-18 04:59:26.813160 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-18 04:59:26.813172 | orchestrator | Wednesday 18 March 2026 04:59:08 +0000 (0:00:00.143) 0:15:40.355 ******* 2026-03-18 04:59:26.813183 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.813194 | orchestrator | 2026-03-18 04:59:26.813205 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-18 04:59:26.813216 | orchestrator | Wednesday 18 March 2026 04:59:08 +0000 (0:00:00.144) 0:15:40.499 ******* 2026-03-18 04:59:26.813227 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.813240 | orchestrator | 2026-03-18 04:59:26.813259 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-18 04:59:26.813277 | orchestrator | Wednesday 18 March 2026 04:59:09 +0000 (0:00:00.138) 0:15:40.638 ******* 2026-03-18 04:59:26.813294 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.813340 | orchestrator | 2026-03-18 04:59:26.813357 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-18 04:59:26.813374 | orchestrator | Wednesday 18 March 2026 04:59:09 +0000 (0:00:00.144) 0:15:40.783 ******* 
2026-03-18 04:59:26.813392 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.813409 | orchestrator | 2026-03-18 04:59:26.813426 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-18 04:59:26.813443 | orchestrator | Wednesday 18 March 2026 04:59:09 +0000 (0:00:00.138) 0:15:40.921 ******* 2026-03-18 04:59:26.813461 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.813479 | orchestrator | 2026-03-18 04:59:26.813496 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-18 04:59:26.813511 | orchestrator | Wednesday 18 March 2026 04:59:09 +0000 (0:00:00.160) 0:15:41.081 ******* 2026-03-18 04:59:26.813526 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:59:26.813545 | orchestrator | 2026-03-18 04:59:26.813563 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-18 04:59:26.813582 | orchestrator | Wednesday 18 March 2026 04:59:09 +0000 (0:00:00.203) 0:15:41.285 ******* 2026-03-18 04:59:26.813600 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-18 04:59:26.813617 | orchestrator | 2026-03-18 04:59:26.813634 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-18 04:59:26.813652 | orchestrator | Wednesday 18 March 2026 04:59:13 +0000 (0:00:03.361) 0:15:44.647 ******* 2026-03-18 04:59:26.813667 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-18 04:59:26.813684 | orchestrator | 2026-03-18 04:59:26.813699 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-18 04:59:26.813715 | orchestrator | Wednesday 18 March 2026 04:59:13 +0000 (0:00:00.508) 0:15:45.156 ******* 2026-03-18 04:59:26.813735 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-03-18 04:59:26.813756 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-03-18 04:59:26.813774 | orchestrator | 2026-03-18 04:59:26.813790 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-18 04:59:26.813805 | orchestrator | Wednesday 18 March 2026 04:59:20 +0000 (0:00:06.466) 0:15:51.622 ******* 2026-03-18 04:59:26.813822 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.813837 | orchestrator | 2026-03-18 04:59:26.813855 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-18 04:59:26.813871 | orchestrator | Wednesday 18 March 2026 04:59:20 +0000 (0:00:00.134) 0:15:51.756 ******* 2026-03-18 04:59:26.813916 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.813935 | orchestrator | 2026-03-18 04:59:26.813989 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 04:59:26.814010 | orchestrator | Wednesday 18 March 2026 04:59:20 +0000 (0:00:00.134) 0:15:51.891 ******* 2026-03-18 04:59:26.814124 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.814143 | orchestrator | 2026-03-18 04:59:26.814161 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-18 
04:59:26.814181 | orchestrator | Wednesday 18 March 2026 04:59:20 +0000 (0:00:00.161) 0:15:52.053 ******* 2026-03-18 04:59:26.814200 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.814234 | orchestrator | 2026-03-18 04:59:26.814246 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 04:59:26.814256 | orchestrator | Wednesday 18 March 2026 04:59:20 +0000 (0:00:00.179) 0:15:52.232 ******* 2026-03-18 04:59:26.814268 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.814278 | orchestrator | 2026-03-18 04:59:26.814289 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 04:59:26.814300 | orchestrator | Wednesday 18 March 2026 04:59:20 +0000 (0:00:00.160) 0:15:52.393 ******* 2026-03-18 04:59:26.814311 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:59:26.814322 | orchestrator | 2026-03-18 04:59:26.814332 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 04:59:26.814343 | orchestrator | Wednesday 18 March 2026 04:59:21 +0000 (0:00:00.254) 0:15:52.647 ******* 2026-03-18 04:59:26.814354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 04:59:26.814365 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 04:59:26.814376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 04:59:26.814386 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.814397 | orchestrator | 2026-03-18 04:59:26.814408 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 04:59:26.814418 | orchestrator | Wednesday 18 March 2026 04:59:21 +0000 (0:00:00.458) 0:15:53.106 ******* 2026-03-18 04:59:26.814429 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 04:59:26.814440 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-03-18 04:59:26.814450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 04:59:26.814461 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.814472 | orchestrator | 2026-03-18 04:59:26.814483 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 04:59:26.814493 | orchestrator | Wednesday 18 March 2026 04:59:21 +0000 (0:00:00.449) 0:15:53.556 ******* 2026-03-18 04:59:26.814504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 04:59:26.814514 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 04:59:26.814525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 04:59:26.814536 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.814547 | orchestrator | 2026-03-18 04:59:26.814557 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 04:59:26.814568 | orchestrator | Wednesday 18 March 2026 04:59:22 +0000 (0:00:00.500) 0:15:54.056 ******* 2026-03-18 04:59:26.814579 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:59:26.814590 | orchestrator | 2026-03-18 04:59:26.814600 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 04:59:26.814611 | orchestrator | Wednesday 18 March 2026 04:59:22 +0000 (0:00:00.175) 0:15:54.232 ******* 2026-03-18 04:59:26.814622 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-18 04:59:26.814632 | orchestrator | 2026-03-18 04:59:26.814643 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-18 04:59:26.814654 | orchestrator | Wednesday 18 March 2026 04:59:23 +0000 (0:00:01.089) 0:15:55.322 ******* 2026-03-18 04:59:26.814665 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:59:26.814675 | orchestrator | 2026-03-18 04:59:26.814686 | orchestrator | TASK 
[ceph-osd : Set_fact add_osd] ********************************************* 2026-03-18 04:59:26.814697 | orchestrator | Wednesday 18 March 2026 04:59:24 +0000 (0:00:00.836) 0:15:56.158 ******* 2026-03-18 04:59:26.814707 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:59:26.814718 | orchestrator | 2026-03-18 04:59:26.814729 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-18 04:59:26.814740 | orchestrator | Wednesday 18 March 2026 04:59:24 +0000 (0:00:00.178) 0:15:56.336 ******* 2026-03-18 04:59:26.814751 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 04:59:26.814762 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 04:59:26.814779 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 04:59:26.814790 | orchestrator | 2026-03-18 04:59:26.814801 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-18 04:59:26.814811 | orchestrator | Wednesday 18 March 2026 04:59:25 +0000 (0:00:00.710) 0:15:57.047 ******* 2026-03-18 04:59:26.814822 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3 2026-03-18 04:59:26.814833 | orchestrator | 2026-03-18 04:59:26.814843 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-18 04:59:26.814854 | orchestrator | Wednesday 18 March 2026 04:59:26 +0000 (0:00:00.612) 0:15:57.659 ******* 2026-03-18 04:59:26.814865 | orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.814876 | orchestrator | 2026-03-18 04:59:26.814914 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-18 04:59:26.814927 | orchestrator | Wednesday 18 March 2026 04:59:26 +0000 (0:00:00.156) 0:15:57.816 ******* 2026-03-18 04:59:26.814938 | 
orchestrator | skipping: [testbed-node-3] 2026-03-18 04:59:26.814948 | orchestrator | 2026-03-18 04:59:26.814959 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-18 04:59:26.814970 | orchestrator | Wednesday 18 March 2026 04:59:26 +0000 (0:00:00.140) 0:15:57.957 ******* 2026-03-18 04:59:26.814981 | orchestrator | ok: [testbed-node-3] 2026-03-18 04:59:26.814991 | orchestrator | 2026-03-18 04:59:26.815020 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-18 05:00:11.293077 | orchestrator | Wednesday 18 March 2026 04:59:26 +0000 (0:00:00.459) 0:15:58.416 ******* 2026-03-18 05:00:11.293180 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:00:11.293204 | orchestrator | 2026-03-18 05:00:11.293226 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-18 05:00:11.293246 | orchestrator | Wednesday 18 March 2026 04:59:26 +0000 (0:00:00.169) 0:15:58.586 ******* 2026-03-18 05:00:11.293262 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-18 05:00:11.293275 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-18 05:00:11.293295 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-18 05:00:11.293314 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-18 05:00:11.293332 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-18 05:00:11.293351 | orchestrator | 2026-03-18 05:00:11.293369 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-18 05:00:11.293389 | orchestrator | Wednesday 18 March 2026 04:59:29 +0000 (0:00:02.084) 0:16:00.671 ******* 2026-03-18 05:00:11.293408 | orchestrator | skipping: [testbed-node-3] 
2026-03-18 05:00:11.293420 | orchestrator | 2026-03-18 05:00:11.293431 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-18 05:00:11.293442 | orchestrator | Wednesday 18 March 2026 04:59:29 +0000 (0:00:00.156) 0:16:00.827 ******* 2026-03-18 05:00:11.293453 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-03-18 05:00:11.293464 | orchestrator | 2026-03-18 05:00:11.293475 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-18 05:00:11.293486 | orchestrator | Wednesday 18 March 2026 04:59:30 +0000 (0:00:00.872) 0:16:01.700 ******* 2026-03-18 05:00:11.293497 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-18 05:00:11.293508 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-18 05:00:11.293518 | orchestrator | 2026-03-18 05:00:11.293529 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-18 05:00:11.293540 | orchestrator | Wednesday 18 March 2026 04:59:30 +0000 (0:00:00.875) 0:16:02.576 ******* 2026-03-18 05:00:11.293550 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 05:00:11.293585 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-18 05:00:11.293597 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-18 05:00:11.293608 | orchestrator | 2026-03-18 05:00:11.293619 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-18 05:00:11.293632 | orchestrator | Wednesday 18 March 2026 04:59:33 +0000 (0:00:02.146) 0:16:04.722 ******* 2026-03-18 05:00:11.293645 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-18 05:00:11.293658 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-18 05:00:11.293670 | orchestrator | ok: [testbed-node-3] 
2026-03-18 05:00:11.293683 | orchestrator |
2026-03-18 05:00:11.293700 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-18 05:00:11.293721 | orchestrator | Wednesday 18 March 2026 04:59:34 +0000 (0:00:00.962) 0:16:05.684 *******
2026-03-18 05:00:11.293743 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:00:11.293762 | orchestrator |
2026-03-18 05:00:11.293776 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-18 05:00:11.293789 | orchestrator | Wednesday 18 March 2026 04:59:34 +0000 (0:00:00.249) 0:16:05.934 *******
2026-03-18 05:00:11.293802 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:00:11.293814 | orchestrator |
2026-03-18 05:00:11.293828 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-18 05:00:11.293840 | orchestrator | Wednesday 18 March 2026 04:59:34 +0000 (0:00:00.147) 0:16:06.082 *******
2026-03-18 05:00:11.293853 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:00:11.293865 | orchestrator |
2026-03-18 05:00:11.293878 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-18 05:00:11.293891 | orchestrator | Wednesday 18 March 2026 04:59:34 +0000 (0:00:00.158) 0:16:06.241 *******
2026-03-18 05:00:11.293903 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3
2026-03-18 05:00:11.293916 | orchestrator |
2026-03-18 05:00:11.293959 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-18 05:00:11.293979 | orchestrator | Wednesday 18 March 2026 04:59:35 +0000 (0:00:00.618) 0:16:06.860 *******
2026-03-18 05:00:11.293999 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:00:11.294078 | orchestrator |
2026-03-18 05:00:11.294093 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-18 05:00:11.294104 | orchestrator | Wednesday 18 March 2026 04:59:35 +0000 (0:00:00.473) 0:16:07.333 *******
2026-03-18 05:00:11.294114 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:00:11.294125 | orchestrator |
2026-03-18 05:00:11.294136 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-18 05:00:11.294146 | orchestrator | Wednesday 18 March 2026 04:59:38 +0000 (0:00:02.522) 0:16:09.855 *******
2026-03-18 05:00:11.294157 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3
2026-03-18 05:00:11.294167 | orchestrator |
2026-03-18 05:00:11.294178 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-18 05:00:11.294189 | orchestrator | Wednesday 18 March 2026 04:59:38 +0000 (0:00:00.568) 0:16:10.424 *******
2026-03-18 05:00:11.294199 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:00:11.294210 | orchestrator |
2026-03-18 05:00:11.294221 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-18 05:00:11.294232 | orchestrator | Wednesday 18 March 2026 04:59:40 +0000 (0:00:01.280) 0:16:11.704 *******
2026-03-18 05:00:11.294243 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:00:11.294253 | orchestrator |
2026-03-18 05:00:11.294277 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-18 05:00:11.294308 | orchestrator | Wednesday 18 March 2026 04:59:41 +0000 (0:00:00.962) 0:16:12.666 *******
2026-03-18 05:00:11.294328 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:00:11.294348 | orchestrator |
2026-03-18 05:00:11.294368 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-18 05:00:11.294380 | orchestrator | Wednesday 18 March 2026 04:59:42 +0000 (0:00:01.195) 0:16:13.862 *******
2026-03-18 05:00:11.294402 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:00:11.294415 | orchestrator |
2026-03-18 05:00:11.294434 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-18 05:00:11.294453 | orchestrator | Wednesday 18 March 2026 04:59:42 +0000 (0:00:00.162) 0:16:14.025 *******
2026-03-18 05:00:11.294471 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:00:11.294490 | orchestrator |
2026-03-18 05:00:11.294509 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/<cluster>-<osd-id> is present] *******
2026-03-18 05:00:11.294528 | orchestrator | Wednesday 18 March 2026 04:59:42 +0000 (0:00:00.147) 0:16:14.173 *******
2026-03-18 05:00:11.294547 | orchestrator | ok: [testbed-node-3] => (item=4)
2026-03-18 05:00:11.294568 | orchestrator | ok: [testbed-node-3] => (item=1)
2026-03-18 05:00:11.294586 | orchestrator |
2026-03-18 05:00:11.294601 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-18 05:00:11.294617 | orchestrator | Wednesday 18 March 2026 04:59:43 +0000 (0:00:00.847) 0:16:15.021 *******
2026-03-18 05:00:11.294636 | orchestrator | ok: [testbed-node-3] => (item=4)
2026-03-18 05:00:11.294656 | orchestrator | ok: [testbed-node-3] => (item=1)
2026-03-18 05:00:11.294674 | orchestrator |
2026-03-18 05:00:11.294693 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-18 05:00:11.294713 | orchestrator | Wednesday 18 March 2026 04:59:45 +0000 (0:00:01.895) 0:16:16.917 *******
2026-03-18 05:00:11.294732 | orchestrator | changed: [testbed-node-3] => (item=4)
2026-03-18 05:00:11.294752 | orchestrator | changed: [testbed-node-3] => (item=1)
2026-03-18 05:00:11.294772 | orchestrator |
2026-03-18 05:00:11.294792 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-18 05:00:11.294809 | orchestrator | Wednesday 18 March 2026 04:59:48 +0000 (0:00:03.550) 0:16:20.467 *******
2026-03-18 05:00:11.294824 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:00:11.294841 | orchestrator |
2026-03-18 05:00:11.294860 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-18 05:00:11.294872 | orchestrator | Wednesday 18 March 2026 04:59:49 +0000 (0:00:00.270) 0:16:20.738 *******
2026-03-18 05:00:11.294885 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:00:11.294904 | orchestrator |
2026-03-18 05:00:11.294948 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-18 05:00:11.294970 | orchestrator | Wednesday 18 March 2026 04:59:49 +0000 (0:00:00.256) 0:16:20.995 *******
2026-03-18 05:00:11.294989 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:00:11.295008 | orchestrator |
2026-03-18 05:00:11.295027 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-03-18 05:00:11.295046 | orchestrator | Wednesday 18 March 2026 04:59:49 +0000 (0:00:00.304) 0:16:21.299 *******
2026-03-18 05:00:11.295065 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:00:11.295085 | orchestrator |
2026-03-18 05:00:11.295106 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-03-18 05:00:11.295125 | orchestrator | Wednesday 18 March 2026 04:59:49 +0000 (0:00:00.122) 0:16:21.421 *******
2026-03-18 05:00:11.295144 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:00:11.295163 | orchestrator |
2026-03-18 05:00:11.295182 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-03-18 05:00:11.295202 | orchestrator | Wednesday 18 March 2026 04:59:50 +0000 (0:00:00.469) 0:16:21.890 *******
2026-03-18 05:00:11.295222 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left).
2026-03-18 05:00:11.295241 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left).
2026-03-18 05:00:11.295261 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left).
2026-03-18 05:00:11.295280 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (597 retries left).
2026-03-18 05:00:11.295300 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (596 retries left).
2026-03-18 05:00:11.295332 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (595 retries left).
2026-03-18 05:00:11.295353 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-18 05:00:11.295371 | orchestrator |
2026-03-18 05:00:11.295390 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-03-18 05:00:11.295409 | orchestrator |
2026-03-18 05:00:11.295427 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-18 05:00:11.295446 | orchestrator | Wednesday 18 March 2026 05:00:10 +0000 (0:00:19.736) 0:16:41.627 *******
2026-03-18 05:00:11.295465 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4
2026-03-18 05:00:11.295513 | orchestrator |
2026-03-18 05:00:11.295534 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-18 05:00:11.295554 | orchestrator | Wednesday 18 March 2026 05:00:10 +0000 (0:00:00.238) 0:16:41.866 *******
2026-03-18 05:00:11.295573 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:11.295593 | orchestrator |
2026-03-18 05:00:11.295614 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-18 05:00:11.295634 | orchestrator | Wednesday 18 March 2026 05:00:10 +0000 (0:00:00.432) 0:16:42.298 *******
2026-03-18 05:00:11.295654 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:11.295674 | orchestrator |
2026-03-18 05:00:11.295696 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-18 05:00:11.295716 | orchestrator | Wednesday 18 March 2026 05:00:10 +0000 (0:00:00.140) 0:16:42.439 *******
2026-03-18 05:00:11.295746 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:11.295767 | orchestrator |
2026-03-18 05:00:11.295800 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-18 05:00:18.605851 | orchestrator | Wednesday 18 March 2026 05:00:11 +0000 (0:00:00.460) 0:16:42.900 *******
2026-03-18 05:00:18.606006 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:18.606101 | orchestrator |
2026-03-18 05:00:18.606121 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-18 05:00:18.606140 | orchestrator | Wednesday 18 March 2026 05:00:11 +0000 (0:00:00.135) 0:16:43.035 *******
2026-03-18 05:00:18.606157 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:18.606174 | orchestrator |
2026-03-18 05:00:18.606191 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-18 05:00:18.606208 | orchestrator | Wednesday 18 March 2026 05:00:11 +0000 (0:00:00.133) 0:16:43.169 *******
2026-03-18 05:00:18.606226 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:18.606242 | orchestrator |
2026-03-18 05:00:18.606297 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-18 05:00:18.606317 | orchestrator | Wednesday 18 March 2026 05:00:11 +0000 (0:00:00.132) 0:16:43.301 *******
2026-03-18 05:00:18.606334 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:18.606354 | orchestrator |
2026-03-18 05:00:18.606370 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-18 05:00:18.606388 | orchestrator | Wednesday 18 March 2026 05:00:11 +0000 (0:00:00.115) 0:16:43.417 *******
2026-03-18 05:00:18.606406 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:18.606422 | orchestrator |
2026-03-18 05:00:18.606440 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-18 05:00:18.606457 | orchestrator | Wednesday 18 March 2026 05:00:11 +0000 (0:00:00.126) 0:16:43.543 *******
2026-03-18 05:00:18.606474 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 05:00:18.606491 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 05:00:18.606508 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 05:00:18.606524 | orchestrator |
2026-03-18 05:00:18.606541 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-18 05:00:18.606558 | orchestrator | Wednesday 18 March 2026 05:00:13 +0000 (0:00:01.145) 0:16:44.689 *******
2026-03-18 05:00:18.606637 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:18.606656 | orchestrator |
2026-03-18 05:00:18.606673 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-18 05:00:18.606689 | orchestrator | Wednesday 18 March 2026 05:00:13 +0000 (0:00:00.270) 0:16:44.959 *******
2026-03-18 05:00:18.606705 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 05:00:18.606720 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 05:00:18.606736 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 05:00:18.606753 | orchestrator |
2026-03-18 05:00:18.606767 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-18 05:00:18.606782 | orchestrator | Wednesday 18 March 2026 05:00:15 +0000 (0:00:01.905) 0:16:46.864 *******
2026-03-18 05:00:18.606798 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-18 05:00:18.606814 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-18 05:00:18.606831 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-18 05:00:18.606847 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:18.606865 | orchestrator |
2026-03-18 05:00:18.606881 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-18 05:00:18.606899 | orchestrator | Wednesday 18 March 2026 05:00:15 +0000 (0:00:00.459) 0:16:47.324 *******
2026-03-18 05:00:18.606918 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-18 05:00:18.606966 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-18 05:00:18.606983 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-18 05:00:18.606999 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:18.607015 | orchestrator |
2026-03-18 05:00:18.607033 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-18 05:00:18.607050 | orchestrator | Wednesday 18 March 2026 05:00:16 +0000 (0:00:00.649) 0:16:47.973 *******
2026-03-18 05:00:18.607068 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-18 05:00:18.607125 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-18 05:00:18.607138 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-18 05:00:18.607158 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:18.607168 | orchestrator |
2026-03-18 05:00:18.607178 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-18 05:00:18.607187 | orchestrator | Wednesday 18 March 2026 05:00:16 +0000 (0:00:00.199) 0:16:48.173 *******
2026-03-18 05:00:18.607199 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'f231ed715636', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 05:00:13.873846', 'end': '2026-03-18 05:00:13.921506', 'delta': '0:00:00.047660', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f231ed715636'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-18 05:00:18.607213 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'c6b616adb9bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 05:00:14.476134', 'end': '2026-03-18 05:00:14.525217', 'delta': '0:00:00.049083', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c6b616adb9bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-18 05:00:18.607223 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '38d5679b5612', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 05:00:15.038426', 'end': '2026-03-18 05:00:15.088438', 'delta': '0:00:00.050012', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['38d5679b5612'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-18 05:00:18.607233 | orchestrator |
2026-03-18 05:00:18.607243 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-18 05:00:18.607252 | orchestrator | Wednesday 18 March 2026 05:00:16 +0000 (0:00:00.229) 0:16:48.403 *******
2026-03-18 05:00:18.607262 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:18.607271 | orchestrator |
2026-03-18 05:00:18.607279 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-18 05:00:18.607287 | orchestrator | Wednesday 18 March 2026 05:00:17 +0000 (0:00:00.282) 0:16:48.685 *******
2026-03-18 05:00:18.607295 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:18.607302 | orchestrator |
2026-03-18 05:00:18.607310 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-18 05:00:18.607318 | orchestrator | Wednesday 18 March 2026 05:00:17 +0000 (0:00:00.259) 0:16:48.944 *******
2026-03-18 05:00:18.607326 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:18.607333 | orchestrator |
2026-03-18 05:00:18.607341 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-18 05:00:18.607349 | orchestrator | Wednesday 18 March 2026 05:00:17 +0000 (0:00:00.147) 0:16:49.092 *******
2026-03-18 05:00:18.607357 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-18 05:00:18.607365 | orchestrator |
2026-03-18 05:00:18.607376 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-18 05:00:18.607393 | orchestrator | Wednesday 18 March 2026 05:00:18 +0000 (0:00:00.973) 0:16:50.065 *******
2026-03-18 05:00:18.607406 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:21.014454 | orchestrator |
2026-03-18 05:00:21.014551 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-18 05:00:21.014566 | orchestrator | Wednesday 18 March 2026 05:00:18 +0000 (0:00:00.153) 0:16:50.218 *******
2026-03-18 05:00:21.014577 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:21.014588 | orchestrator |
2026-03-18 05:00:21.014599 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-18 05:00:21.014609 | orchestrator | Wednesday 18 March 2026 05:00:18 +0000 (0:00:00.117) 0:16:50.336 *******
2026-03-18 05:00:21.014618 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:21.014628 | orchestrator |
2026-03-18 05:00:21.014638 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-18 05:00:21.014648 | orchestrator | Wednesday 18 March 2026 05:00:19 +0000 (0:00:00.930) 0:16:51.267 *******
2026-03-18 05:00:21.014658 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:21.014667 | orchestrator |
2026-03-18 05:00:21.014677 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-18 05:00:21.014687 | orchestrator | Wednesday 18 March 2026 05:00:19 +0000 (0:00:00.131) 0:16:51.398 *******
2026-03-18 05:00:21.014696 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:21.014706 | orchestrator |
2026-03-18 05:00:21.014716 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-18 05:00:21.014726 | orchestrator | Wednesday 18 March 2026 05:00:19 +0000 (0:00:00.148) 0:16:51.547 *******
2026-03-18 05:00:21.014735 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:21.014746 | orchestrator |
2026-03-18 05:00:21.014756 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-18 05:00:21.014777 | orchestrator | Wednesday 18 March 2026 05:00:20 +0000 (0:00:00.191) 0:16:51.738 *******
2026-03-18 05:00:21.014787 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:21.014797 | orchestrator |
2026-03-18 05:00:21.014806 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-18 05:00:21.014816 | orchestrator | Wednesday 18 March 2026 05:00:20 +0000 (0:00:00.128) 0:16:51.867 *******
2026-03-18 05:00:21.014826 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:21.014836 | orchestrator |
2026-03-18 05:00:21.014846 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-18 05:00:21.014855 | orchestrator | Wednesday 18 March 2026 05:00:20 +0000 (0:00:00.167) 0:16:52.035 *******
2026-03-18 05:00:21.014865 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:21.014875 | orchestrator |
2026-03-18 05:00:21.014884 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-18 05:00:21.014895 | orchestrator | Wednesday 18 March 2026 05:00:20 +0000 (0:00:00.146) 0:16:52.181 *******
2026-03-18 05:00:21.014904 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:21.014914 | orchestrator |
2026-03-18 05:00:21.014924 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-18 05:00:21.014994 | orchestrator | Wednesday 18 March 2026 05:00:20 +0000 (0:00:00.169) 0:16:52.351 *******
2026-03-18 05:00:21.015009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:00:21.015026 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d', 'dm-uuid-LVM-1nghto8FjlgOMGE0qJuNE35bcFGeakm7FeqYn9N8yM2I7mHfmTh3UyYEE55mFAWL'], 'uuids': ['983d6df2-25ad-44ac-a3c4-ba9acd83e203'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4bc8da1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL']}})
2026-03-18 05:00:21.015064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a', 'scsi-SQEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9cbe8edb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-18 05:00:21.015110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jnV2yd-YS7R-Vqep-tcrP-VJxp-okiM-Yb1ELG', 'scsi-0QEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc', 'scsi-SQEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '80734d97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af']}})
2026-03-18 05:00:21.015124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:00:21.015137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:00:21.015151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-18 05:00:21.015164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:00:21.015176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M', 'dm-uuid-CRYPT-LUKS2-61b3b30ad50c493e85c9b4a1f26e6c13-31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-18 05:00:21.015195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:00:21.015212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af', 'dm-uuid-LVM-r2QSpox5L5YvZxbLW2ofZmnL2yRyHAcb31gjpKAQuj1V0dzEH4DggGep9onP7U5M'], 'uuids': ['61b3b30a-d50c-493e-85c9-b4a1f26e6c13'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '80734d97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M']}})
2026-03-18 05:00:21.015233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-gdKyfy-wnzk-0StP-QaSt-irpk-iROA-l0CD4I', 'scsi-0QEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a', 'scsi-SQEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4bc8da1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d']}})
2026-03-18 05:00:21.376806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:00:21.376917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '248efa21', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-18 05:00:21.377007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:00:21.377036 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:00:21.377050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL', 'dm-uuid-CRYPT-LUKS2-983d6df225ad44aca3c4ba9acd83e203-FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-18 05:00:21.377064 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:21.377076 | orchestrator |
2026-03-18 05:00:21.377110 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-18 05:00:21.377133 | orchestrator | Wednesday 18 March 2026 05:00:21 +0000 (0:00:00.423) 0:16:52.774 *******
2026-03-18 05:00:21.377153 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:00:21.377173 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d', 'dm-uuid-LVM-1nghto8FjlgOMGE0qJuNE35bcFGeakm7FeqYn9N8yM2I7mHfmTh3UyYEE55mFAWL'], 'uuids': ['983d6df2-25ad-44ac-a3c4-ba9acd83e203'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4bc8da1', 'removable': '0', 'support_discard':
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:00:21.377203 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a', 'scsi-SQEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9cbe8edb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:00:21.377229 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jnV2yd-YS7R-Vqep-tcrP-VJxp-okiM-Yb1ELG', 'scsi-0QEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc', 'scsi-SQEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '80734d97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:00:21.377251 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:00:21.377285 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:00:21.567762 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:00:21.568667 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:00:21.568690 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M', 'dm-uuid-CRYPT-LUKS2-61b3b30ad50c493e85c9b4a1f26e6c13-31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:00:21.568717 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:00:21.568724 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af', 'dm-uuid-LVM-r2QSpox5L5YvZxbLW2ofZmnL2yRyHAcb31gjpKAQuj1V0dzEH4DggGep9onP7U5M'], 'uuids': ['61b3b30a-d50c-493e-85c9-b4a1f26e6c13'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '80734d97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:00:21.568745 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-gdKyfy-wnzk-0StP-QaSt-irpk-iROA-l0CD4I', 'scsi-0QEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a', 'scsi-SQEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4bc8da1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:00:21.568751 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:00:21.568764 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '248efa21', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:00:21.568770 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:00:21.568777 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:00:35.519313 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL', 'dm-uuid-CRYPT-LUKS2-983d6df225ad44aca3c4ba9acd83e203-FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:00:35.519435 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:35.519454 | orchestrator |
2026-03-18 05:00:35.519468 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-18 05:00:35.519480 | orchestrator | Wednesday 18 March 2026 05:00:21 +0000 (0:00:00.566) 0:16:53.173 *******
2026-03-18 05:00:35.519491 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:35.519503 | orchestrator |
2026-03-18 05:00:35.519515 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-18 05:00:35.519525 | orchestrator | Wednesday 18 March 2026 05:00:22 +0000 (0:00:00.139) 0:16:53.740 *******
2026-03-18 05:00:35.519536 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:35.519547 | orchestrator |
2026-03-18 05:00:35.519558 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-18 05:00:35.519569 | orchestrator | Wednesday 18 March 2026 05:00:22 +0000 (0:00:00.461) 0:16:53.879 *******
2026-03-18 05:00:35.519580 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:35.519590 | orchestrator |
2026-03-18 05:00:35.519601 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-18 05:00:35.519612 | orchestrator | Wednesday 18 March 2026 05:00:22 +0000 (0:00:00.449) 0:16:54.341 *******
2026-03-18 05:00:35.519624 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:35.519635 | orchestrator |
2026-03-18 05:00:35.519645 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-18 05:00:35.519656 | orchestrator | Wednesday 18 March 2026 05:00:23 +0000 (0:00:00.259) 0:16:54.790 *******
2026-03-18 05:00:35.519667 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:35.519678 | orchestrator |
2026-03-18 05:00:35.519689 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-18 05:00:35.519699 | orchestrator | Wednesday 18 March 2026 05:00:23 +0000 (0:00:00.259) 0:16:55.050 *******
2026-03-18 05:00:35.519710 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:35.519721 | orchestrator |
2026-03-18 05:00:35.519731 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-18 05:00:35.519759 | orchestrator | Wednesday 18 March 2026 05:00:23 +0000 (0:00:00.153) 0:16:55.204 *******
2026-03-18 05:00:35.519770 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-18 05:00:35.519782 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-18 05:00:35.519793 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-18 05:00:35.519803 | orchestrator |
2026-03-18 05:00:35.519814 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-18 05:00:35.519827 | orchestrator | Wednesday 18 March 2026 05:00:24 +0000 (0:00:00.747) 0:16:55.951 *******
2026-03-18 05:00:35.519839 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-18 05:00:35.519853 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-18 05:00:35.519866 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-18 05:00:35.519880 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:35.519892 | orchestrator |
2026-03-18 05:00:35.519903 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-18 05:00:35.519935 | orchestrator | Wednesday 18 March 2026 05:00:24 +0000 (0:00:00.185) 0:16:56.136 *******
2026-03-18 05:00:35.519973 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-03-18 05:00:35.519985 | orchestrator |
2026-03-18 05:00:35.519996 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-18 05:00:35.520009 | orchestrator | Wednesday 18 March 2026 05:00:24 +0000 (0:00:00.256) 0:16:56.393 *******
2026-03-18 05:00:35.520020 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:35.520030 | orchestrator |
2026-03-18 05:00:35.520041 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-18 05:00:35.520052 | orchestrator | Wednesday 18 March 2026 05:00:24 +0000 (0:00:00.144) 0:16:56.537 *******
2026-03-18 05:00:35.520063 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:35.520074 | orchestrator |
2026-03-18 05:00:35.520085 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-18 05:00:35.520096 | orchestrator | Wednesday 18 March 2026 05:00:25 +0000 (0:00:00.211) 0:16:56.748 *******
2026-03-18 05:00:35.520106 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:35.520117 | orchestrator |
2026-03-18 05:00:35.520128 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-18 05:00:35.520138 | orchestrator | Wednesday 18 March 2026 05:00:25 +0000 (0:00:00.155) 0:16:56.904 *******
2026-03-18 05:00:35.520149 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:35.520160 | orchestrator |
2026-03-18 05:00:35.520170 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-18 05:00:35.520181 | orchestrator | Wednesday 18 March 2026 05:00:25 +0000 (0:00:00.250) 0:16:57.155 *******
2026-03-18 05:00:35.520192 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-18 05:00:35.520221 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-18 05:00:35.520233 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-18 05:00:35.520243 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:35.520254 | orchestrator |
2026-03-18 05:00:35.520265 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-18 05:00:35.520276 | orchestrator | Wednesday 18 March 2026 05:00:26 +0000 (0:00:00.747) 0:16:57.902 *******
2026-03-18 05:00:35.520287 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-18 05:00:35.520297 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-18 05:00:35.520308 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-18 05:00:35.520318 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:35.520329 | orchestrator |
2026-03-18 05:00:35.520340 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-18 05:00:35.520350 | orchestrator | Wednesday 18 March 2026 05:00:27 +0000 (0:00:00.736) 0:16:58.638 *******
2026-03-18 05:00:35.520361 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-18 05:00:35.520372 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-18 05:00:35.520382 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-18 05:00:35.520393 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:35.520404 | orchestrator |
2026-03-18 05:00:35.520414 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-18 05:00:35.520425 | orchestrator | Wednesday 18 March 2026 05:00:28 +0000 (0:00:01.062) 0:16:59.701 *******
2026-03-18 05:00:35.520436 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:35.520447 | orchestrator |
2026-03-18 05:00:35.520457 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-18 05:00:35.520468 | orchestrator | Wednesday 18 March 2026 05:00:28 +0000 (0:00:00.163) 0:16:59.865 *******
2026-03-18 05:00:35.520479 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-18 05:00:35.520489 | orchestrator |
2026-03-18 05:00:35.520500 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-18 05:00:35.520519 | orchestrator | Wednesday 18 March 2026 05:00:28 +0000 (0:00:00.354) 0:17:00.219 *******
2026-03-18 05:00:35.520530 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 05:00:35.520541 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 05:00:35.520552 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 05:00:35.520562 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-18 05:00:35.520573 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-18 05:00:35.520584 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-18 05:00:35.520594 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 05:00:35.520605 | orchestrator |
2026-03-18 05:00:35.520622 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-18 05:00:35.520633 | orchestrator | Wednesday 18 March 2026 05:00:29 +0000 (0:00:00.872) 0:17:01.092 *******
2026-03-18 05:00:35.520644 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 05:00:35.520654 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 05:00:35.520665 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 05:00:35.520676 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-18 05:00:35.520687 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-18 05:00:35.520698 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-18 05:00:35.520709 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 05:00:35.520719 | orchestrator |
2026-03-18 05:00:35.520730 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-03-18 05:00:35.520740 | orchestrator | Wednesday 18 March 2026 05:00:31 +0000 (0:00:01.786) 0:17:02.878 *******
2026-03-18 05:00:35.520751 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:35.520762 | orchestrator |
2026-03-18 05:00:35.520772 | orchestrator | TASK [Set num_osds] ************************************************************
2026-03-18 05:00:35.520783 | orchestrator | Wednesday 18 March 2026 05:00:31 +0000 (0:00:00.491) 0:17:03.370 *******
2026-03-18 05:00:35.520794 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:35.520805 | orchestrator |
2026-03-18 05:00:35.520815 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-03-18 05:00:35.520826 | orchestrator | Wednesday 18 March 2026 05:00:31 +0000 (0:00:00.152) 0:17:03.522 *******
2026-03-18 05:00:35.520837 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:35.520848 | orchestrator |
2026-03-18 05:00:35.520859 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-03-18 05:00:35.520869 | orchestrator | Wednesday 18 March 2026 05:00:32 +0000 (0:00:00.299) 0:17:03.822 *******
2026-03-18 05:00:35.520880 | orchestrator | changed: [testbed-node-4] => (item=0)
2026-03-18 05:00:35.520891 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-03-18 05:00:35.520902 | orchestrator |
2026-03-18 05:00:35.520912 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-18 05:00:35.520923 | orchestrator | Wednesday 18 March 2026 05:00:35 +0000 (0:00:03.107) 0:17:06.930 *******
2026-03-18 05:00:35.520934 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-03-18 05:00:35.520973 | orchestrator |
2026-03-18 05:00:35.520984 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-18 05:00:35.521003 | orchestrator | Wednesday 18 March 2026 05:00:35 +0000 (0:00:00.193) 0:17:07.123 *******
2026-03-18 05:00:47.569092 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-03-18 05:00:47.569235 | orchestrator |
2026-03-18 05:00:47.569254 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-18 05:00:47.569267 | orchestrator | Wednesday 18 March 2026 05:00:35 +0000 (0:00:00.479) 0:17:07.602 *******
2026-03-18 05:00:47.569279 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:47.569292 | orchestrator |
2026-03-18 05:00:47.569304 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-18 05:00:47.569316 | orchestrator | Wednesday 18 March 2026 05:00:36 +0000 (0:00:00.142) 0:17:07.745 *******
2026-03-18 05:00:47.569327 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:47.569340 | orchestrator |
2026-03-18 05:00:47.569351 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-18 05:00:47.569363 | orchestrator | Wednesday 18 March 2026 05:00:36 +0000 (0:00:00.496) 0:17:08.242 *******
2026-03-18 05:00:47.569374 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:47.569385 | orchestrator |
2026-03-18 05:00:47.569397 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-18 05:00:47.569408 | orchestrator | Wednesday 18 March 2026 05:00:37 +0000 (0:00:00.551) 0:17:08.793 *******
2026-03-18 05:00:47.569419 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:47.569431 | orchestrator |
2026-03-18 05:00:47.569443 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-18 05:00:47.569454 | orchestrator | Wednesday 18 March 2026 05:00:37 +0000 (0:00:00.543) 0:17:09.336 *******
2026-03-18 05:00:47.569465 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:47.569477 | orchestrator |
2026-03-18 05:00:47.569488 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-18 05:00:47.569500 | orchestrator | Wednesday 18 March 2026 05:00:37 +0000 (0:00:00.169) 0:17:09.506 *******
2026-03-18 05:00:47.569512 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:47.569523 | orchestrator |
2026-03-18 05:00:47.569535 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-18 05:00:47.569547 | orchestrator | Wednesday 18 March 2026 05:00:38 +0000 (0:00:00.142) 0:17:09.649 *******
2026-03-18 05:00:47.569558 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:47.569570 | orchestrator |
2026-03-18 05:00:47.569584 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-18 05:00:47.569597 | orchestrator | Wednesday 18 March 2026 05:00:38 +0000 (0:00:00.157) 0:17:09.806 *******
2026-03-18 05:00:47.569611 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:47.569624 | orchestrator |
2026-03-18 05:00:47.569637 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-18 05:00:47.569650 | orchestrator | Wednesday 18 March 2026 05:00:38 +0000 (0:00:00.556) 0:17:10.363 *******
2026-03-18 05:00:47.569663 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:47.569676 | orchestrator |
2026-03-18 05:00:47.569690 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-18 05:00:47.569703 | orchestrator | Wednesday 18 March 2026 05:00:39 +0000 (0:00:00.538) 0:17:10.901 *******
2026-03-18 05:00:47.569716 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:47.569730 | orchestrator |
2026-03-18 05:00:47.569758 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-18 05:00:47.569771 | orchestrator | Wednesday 18 March 2026 05:00:39 +0000 (0:00:00.175) 0:17:11.076 *******
2026-03-18 05:00:47.569785 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:47.569799 | orchestrator |
2026-03-18 05:00:47.569813 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-18 05:00:47.569827 | orchestrator | Wednesday 18 March 2026 05:00:39 +0000 (0:00:00.140) 0:17:11.217 *******
2026-03-18 05:00:47.569841 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:47.569854 | orchestrator |
2026-03-18 05:00:47.569868 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-18 05:00:47.569882 | orchestrator | Wednesday 18 March 2026 05:00:39 +0000 (0:00:00.161) 0:17:11.378 *******
2026-03-18 05:00:47.569896 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:47.569918 | orchestrator |
2026-03-18 05:00:47.569932 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-18 05:00:47.569944 | orchestrator | Wednesday 18 March 2026 05:00:39 +0000 (0:00:00.146) 0:17:11.525 *******
2026-03-18 05:00:47.569987 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:47.570002 | orchestrator |
2026-03-18 05:00:47.570094 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-18 05:00:47.570115 | orchestrator | Wednesday 18 March 2026 05:00:40 +0000 (0:00:00.547) 0:17:12.073 *******
2026-03-18 05:00:47.570134 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:47.570197 | orchestrator |
2026-03-18 05:00:47.570216 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-18 05:00:47.570234 | orchestrator | Wednesday 18 March 2026 05:00:40 +0000 (0:00:00.171) 0:17:12.245 *******
2026-03-18 05:00:47.570245 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:47.570256 | orchestrator |
2026-03-18 05:00:47.570266 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-18 05:00:47.570277 | orchestrator | Wednesday 18 March 2026 05:00:40 +0000 (0:00:00.131) 0:17:12.376 *******
2026-03-18 05:00:47.570288 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:47.570299 | orchestrator |
2026-03-18 05:00:47.570310 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-18 05:00:47.570320 | orchestrator | Wednesday 18 March 2026 05:00:40 +0000 (0:00:00.142) 0:17:12.519 *******
2026-03-18 05:00:47.570331 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:47.570342 | orchestrator |
2026-03-18 05:00:47.570352 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-18 05:00:47.570363 | orchestrator | Wednesday 18 March 2026 05:00:41 +0000 (0:00:00.176) 0:17:12.696 *******
2026-03-18 05:00:47.570374 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:00:47.570385 | orchestrator |
2026-03-18 05:00:47.570396 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-18 05:00:47.570406 | orchestrator | Wednesday 18 March 2026 05:00:41 +0000 (0:00:00.231) 0:17:12.927 *******
2026-03-18 05:00:47.570417 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:00:47.570428 | orchestrator |
2026-03-18 05:00:47.570459 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-18
05:00:47.570471 | orchestrator | Wednesday 18 March 2026 05:00:41 +0000 (0:00:00.128) 0:17:13.056 ******* 2026-03-18 05:00:47.570481 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:00:47.570493 | orchestrator | 2026-03-18 05:00:47.570503 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-18 05:00:47.570515 | orchestrator | Wednesday 18 March 2026 05:00:41 +0000 (0:00:00.138) 0:17:13.195 ******* 2026-03-18 05:00:47.570525 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:00:47.570536 | orchestrator | 2026-03-18 05:00:47.570555 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-18 05:00:47.570573 | orchestrator | Wednesday 18 March 2026 05:00:41 +0000 (0:00:00.125) 0:17:13.320 ******* 2026-03-18 05:00:47.570591 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:00:47.570608 | orchestrator | 2026-03-18 05:00:47.570625 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-18 05:00:47.570644 | orchestrator | Wednesday 18 March 2026 05:00:41 +0000 (0:00:00.128) 0:17:13.449 ******* 2026-03-18 05:00:47.570663 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:00:47.570681 | orchestrator | 2026-03-18 05:00:47.570697 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-18 05:00:47.570708 | orchestrator | Wednesday 18 March 2026 05:00:41 +0000 (0:00:00.131) 0:17:13.581 ******* 2026-03-18 05:00:47.570719 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:00:47.570737 | orchestrator | 2026-03-18 05:00:47.570755 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-18 05:00:47.570773 | orchestrator | Wednesday 18 March 2026 05:00:42 +0000 (0:00:00.120) 0:17:13.702 ******* 2026-03-18 05:00:47.570789 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:00:47.570820 | 
orchestrator | 2026-03-18 05:00:47.570840 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-18 05:00:47.570859 | orchestrator | Wednesday 18 March 2026 05:00:42 +0000 (0:00:00.445) 0:17:14.147 ******* 2026-03-18 05:00:47.570879 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:00:47.570897 | orchestrator | 2026-03-18 05:00:47.570915 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-18 05:00:47.570926 | orchestrator | Wednesday 18 March 2026 05:00:42 +0000 (0:00:00.126) 0:17:14.273 ******* 2026-03-18 05:00:47.570937 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:00:47.570995 | orchestrator | 2026-03-18 05:00:47.571009 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-18 05:00:47.571020 | orchestrator | Wednesday 18 March 2026 05:00:42 +0000 (0:00:00.112) 0:17:14.386 ******* 2026-03-18 05:00:47.571031 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:00:47.571042 | orchestrator | 2026-03-18 05:00:47.571053 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-18 05:00:47.571063 | orchestrator | Wednesday 18 March 2026 05:00:42 +0000 (0:00:00.116) 0:17:14.503 ******* 2026-03-18 05:00:47.571074 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:00:47.571085 | orchestrator | 2026-03-18 05:00:47.571096 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-18 05:00:47.571114 | orchestrator | Wednesday 18 March 2026 05:00:43 +0000 (0:00:00.130) 0:17:14.634 ******* 2026-03-18 05:00:47.571125 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:00:47.571136 | orchestrator | 2026-03-18 05:00:47.571147 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-18 05:00:47.571157 | orchestrator | Wednesday 18 
March 2026 05:00:43 +0000 (0:00:00.183) 0:17:14.817 ******* 2026-03-18 05:00:47.571168 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:00:47.571182 | orchestrator | 2026-03-18 05:00:47.571202 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-18 05:00:47.571221 | orchestrator | Wednesday 18 March 2026 05:00:44 +0000 (0:00:00.899) 0:17:15.717 ******* 2026-03-18 05:00:47.571240 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:00:47.571259 | orchestrator | 2026-03-18 05:00:47.571278 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-18 05:00:47.571298 | orchestrator | Wednesday 18 March 2026 05:00:45 +0000 (0:00:01.274) 0:17:16.991 ******* 2026-03-18 05:00:47.571318 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-03-18 05:00:47.571339 | orchestrator | 2026-03-18 05:00:47.571358 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-18 05:00:47.571378 | orchestrator | Wednesday 18 March 2026 05:00:45 +0000 (0:00:00.192) 0:17:17.184 ******* 2026-03-18 05:00:47.571397 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:00:47.571417 | orchestrator | 2026-03-18 05:00:47.571436 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-18 05:00:47.571454 | orchestrator | Wednesday 18 March 2026 05:00:45 +0000 (0:00:00.133) 0:17:17.317 ******* 2026-03-18 05:00:47.571465 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:00:47.571476 | orchestrator | 2026-03-18 05:00:47.571487 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-18 05:00:47.571497 | orchestrator | Wednesday 18 March 2026 05:00:45 +0000 (0:00:00.138) 0:17:17.455 ******* 2026-03-18 05:00:47.571508 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-18 05:00:47.571519 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-18 05:00:47.571529 | orchestrator | 2026-03-18 05:00:47.571540 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-18 05:00:47.571551 | orchestrator | Wednesday 18 March 2026 05:00:46 +0000 (0:00:01.110) 0:17:18.566 ******* 2026-03-18 05:00:47.571561 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:00:47.571572 | orchestrator | 2026-03-18 05:00:47.571583 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-18 05:00:47.571603 | orchestrator | Wednesday 18 March 2026 05:00:47 +0000 (0:00:00.457) 0:17:19.023 ******* 2026-03-18 05:00:47.571614 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:00:47.571625 | orchestrator | 2026-03-18 05:00:47.571636 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-18 05:00:47.571658 | orchestrator | Wednesday 18 March 2026 05:00:47 +0000 (0:00:00.151) 0:17:19.175 ******* 2026-03-18 05:01:02.660664 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.660774 | orchestrator | 2026-03-18 05:01:02.660790 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-18 05:01:02.660802 | orchestrator | Wednesday 18 March 2026 05:00:47 +0000 (0:00:00.189) 0:17:19.364 ******* 2026-03-18 05:01:02.660812 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.660822 | orchestrator | 2026-03-18 05:01:02.660833 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-18 05:01:02.660843 | orchestrator | Wednesday 18 March 2026 05:00:47 +0000 (0:00:00.147) 0:17:19.512 ******* 2026-03-18 05:01:02.660854 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-4 2026-03-18 05:01:02.660864 | orchestrator | 2026-03-18 05:01:02.660874 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-18 05:01:02.660884 | orchestrator | Wednesday 18 March 2026 05:00:48 +0000 (0:00:00.269) 0:17:19.781 ******* 2026-03-18 05:01:02.660894 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:01:02.660904 | orchestrator | 2026-03-18 05:01:02.660914 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-18 05:01:02.660925 | orchestrator | Wednesday 18 March 2026 05:00:48 +0000 (0:00:00.737) 0:17:20.519 ******* 2026-03-18 05:01:02.660934 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 05:01:02.660944 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 05:01:02.660953 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-18 05:01:02.661044 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.661063 | orchestrator | 2026-03-18 05:01:02.661073 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-18 05:01:02.661083 | orchestrator | Wednesday 18 March 2026 05:00:49 +0000 (0:00:00.159) 0:17:20.678 ******* 2026-03-18 05:01:02.661093 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.661102 | orchestrator | 2026-03-18 05:01:02.661112 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-18 05:01:02.661122 | orchestrator | Wednesday 18 March 2026 05:00:49 +0000 (0:00:00.131) 0:17:20.809 ******* 2026-03-18 05:01:02.661131 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.661141 | orchestrator | 2026-03-18 05:01:02.661151 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-18 05:01:02.661160 | 
orchestrator | Wednesday 18 March 2026 05:00:49 +0000 (0:00:00.175) 0:17:20.985 ******* 2026-03-18 05:01:02.661170 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.661179 | orchestrator | 2026-03-18 05:01:02.661189 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-18 05:01:02.661199 | orchestrator | Wednesday 18 March 2026 05:00:49 +0000 (0:00:00.148) 0:17:21.133 ******* 2026-03-18 05:01:02.661208 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.661221 | orchestrator | 2026-03-18 05:01:02.661248 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-18 05:01:02.661261 | orchestrator | Wednesday 18 March 2026 05:00:49 +0000 (0:00:00.170) 0:17:21.304 ******* 2026-03-18 05:01:02.661273 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.661284 | orchestrator | 2026-03-18 05:01:02.661295 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-18 05:01:02.661307 | orchestrator | Wednesday 18 March 2026 05:00:50 +0000 (0:00:00.464) 0:17:21.768 ******* 2026-03-18 05:01:02.661341 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:01:02.661353 | orchestrator | 2026-03-18 05:01:02.661365 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-18 05:01:02.661377 | orchestrator | Wednesday 18 March 2026 05:00:51 +0000 (0:00:01.529) 0:17:23.298 ******* 2026-03-18 05:01:02.661388 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:01:02.661397 | orchestrator | 2026-03-18 05:01:02.661407 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-18 05:01:02.661416 | orchestrator | Wednesday 18 March 2026 05:00:51 +0000 (0:00:00.159) 0:17:23.457 ******* 2026-03-18 05:01:02.661426 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 
2026-03-18 05:01:02.661435 | orchestrator | 2026-03-18 05:01:02.661445 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-18 05:01:02.661454 | orchestrator | Wednesday 18 March 2026 05:00:52 +0000 (0:00:00.256) 0:17:23.713 ******* 2026-03-18 05:01:02.661464 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.661473 | orchestrator | 2026-03-18 05:01:02.661483 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-18 05:01:02.661492 | orchestrator | Wednesday 18 March 2026 05:00:52 +0000 (0:00:00.149) 0:17:23.862 ******* 2026-03-18 05:01:02.661502 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.661512 | orchestrator | 2026-03-18 05:01:02.661521 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-18 05:01:02.661531 | orchestrator | Wednesday 18 March 2026 05:00:52 +0000 (0:00:00.167) 0:17:24.030 ******* 2026-03-18 05:01:02.661540 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.661550 | orchestrator | 2026-03-18 05:01:02.661561 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-18 05:01:02.661571 | orchestrator | Wednesday 18 March 2026 05:00:52 +0000 (0:00:00.165) 0:17:24.196 ******* 2026-03-18 05:01:02.661582 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.661593 | orchestrator | 2026-03-18 05:01:02.661603 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-18 05:01:02.661614 | orchestrator | Wednesday 18 March 2026 05:00:52 +0000 (0:00:00.154) 0:17:24.351 ******* 2026-03-18 05:01:02.661624 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.661635 | orchestrator | 2026-03-18 05:01:02.661646 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-18 05:01:02.661657 | orchestrator | 
Wednesday 18 March 2026 05:00:52 +0000 (0:00:00.165) 0:17:24.516 ******* 2026-03-18 05:01:02.661667 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.661678 | orchestrator | 2026-03-18 05:01:02.661706 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-18 05:01:02.661717 | orchestrator | Wednesday 18 March 2026 05:00:53 +0000 (0:00:00.173) 0:17:24.690 ******* 2026-03-18 05:01:02.661728 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.661739 | orchestrator | 2026-03-18 05:01:02.661749 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-18 05:01:02.661760 | orchestrator | Wednesday 18 March 2026 05:00:53 +0000 (0:00:00.170) 0:17:24.861 ******* 2026-03-18 05:01:02.661771 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.661781 | orchestrator | 2026-03-18 05:01:02.661792 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-18 05:01:02.661803 | orchestrator | Wednesday 18 March 2026 05:00:53 +0000 (0:00:00.181) 0:17:25.042 ******* 2026-03-18 05:01:02.661813 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:01:02.661824 | orchestrator | 2026-03-18 05:01:02.661835 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-18 05:01:02.661845 | orchestrator | Wednesday 18 March 2026 05:00:53 +0000 (0:00:00.531) 0:17:25.574 ******* 2026-03-18 05:01:02.661856 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-03-18 05:01:02.661867 | orchestrator | 2026-03-18 05:01:02.661878 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-18 05:01:02.661898 | orchestrator | Wednesday 18 March 2026 05:00:54 +0000 (0:00:00.223) 0:17:25.797 ******* 2026-03-18 05:01:02.661909 | orchestrator | ok: [testbed-node-4] => 
(item=/etc/ceph) 2026-03-18 05:01:02.661921 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-18 05:01:02.661932 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-18 05:01:02.661943 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-18 05:01:02.661954 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-18 05:01:02.661991 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-18 05:01:02.662002 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-18 05:01:02.662013 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-18 05:01:02.662088 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 05:01:02.662100 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 05:01:02.662110 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 05:01:02.662121 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 05:01:02.662132 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 05:01:02.662142 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 05:01:02.662153 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-03-18 05:01:02.662164 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-03-18 05:01:02.662174 | orchestrator | 2026-03-18 05:01:02.662191 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-18 05:01:02.662202 | orchestrator | Wednesday 18 March 2026 05:00:59 +0000 (0:00:05.486) 0:17:31.284 ******* 2026-03-18 05:01:02.662213 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-03-18 05:01:02.662224 | orchestrator | 2026-03-18 05:01:02.662234 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-03-18 05:01:02.662245 | orchestrator | Wednesday 18 March 2026 05:00:59 +0000 (0:00:00.212) 0:17:31.496 ******* 2026-03-18 05:01:02.662256 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-18 05:01:02.662268 | orchestrator | 2026-03-18 05:01:02.662279 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-18 05:01:02.662289 | orchestrator | Wednesday 18 March 2026 05:01:00 +0000 (0:00:00.532) 0:17:32.028 ******* 2026-03-18 05:01:02.662300 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-18 05:01:02.662311 | orchestrator | 2026-03-18 05:01:02.662321 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-18 05:01:02.662332 | orchestrator | Wednesday 18 March 2026 05:01:01 +0000 (0:00:01.042) 0:17:33.071 ******* 2026-03-18 05:01:02.662342 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.662353 | orchestrator | 2026-03-18 05:01:02.662364 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-18 05:01:02.662374 | orchestrator | Wednesday 18 March 2026 05:01:01 +0000 (0:00:00.159) 0:17:33.231 ******* 2026-03-18 05:01:02.662385 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.662396 | orchestrator | 2026-03-18 05:01:02.662406 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-18 05:01:02.662417 | orchestrator | Wednesday 18 March 2026 05:01:01 +0000 (0:00:00.142) 0:17:33.374 ******* 2026-03-18 05:01:02.662427 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.662438 | orchestrator | 2026-03-18 05:01:02.662449 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-03-18 05:01:02.662459 | orchestrator | Wednesday 18 March 2026 05:01:01 +0000 (0:00:00.149) 0:17:33.523 ******* 2026-03-18 05:01:02.662470 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.662481 | orchestrator | 2026-03-18 05:01:02.662500 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-18 05:01:02.662511 | orchestrator | Wednesday 18 March 2026 05:01:02 +0000 (0:00:00.139) 0:17:33.662 ******* 2026-03-18 05:01:02.662522 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.662532 | orchestrator | 2026-03-18 05:01:02.662543 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-18 05:01:02.662554 | orchestrator | Wednesday 18 March 2026 05:01:02 +0000 (0:00:00.162) 0:17:33.825 ******* 2026-03-18 05:01:02.662565 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:02.662575 | orchestrator | 2026-03-18 05:01:02.662594 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-18 05:01:22.984536 | orchestrator | Wednesday 18 March 2026 05:01:02 +0000 (0:00:00.439) 0:17:34.264 ******* 2026-03-18 05:01:22.984648 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:22.984665 | orchestrator | 2026-03-18 05:01:22.984678 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-18 05:01:22.984690 | orchestrator | Wednesday 18 March 2026 05:01:02 +0000 (0:00:00.149) 0:17:34.413 ******* 2026-03-18 05:01:22.984701 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:22.984712 | orchestrator | 2026-03-18 05:01:22.984724 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-18 05:01:22.984735 | orchestrator | Wednesday 18 
March 2026 05:01:02 +0000 (0:00:00.158) 0:17:34.572 ******* 2026-03-18 05:01:22.984746 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:22.984757 | orchestrator | 2026-03-18 05:01:22.984769 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-18 05:01:22.984780 | orchestrator | Wednesday 18 March 2026 05:01:03 +0000 (0:00:00.147) 0:17:34.719 ******* 2026-03-18 05:01:22.984791 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:22.984801 | orchestrator | 2026-03-18 05:01:22.984812 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-18 05:01:22.984823 | orchestrator | Wednesday 18 March 2026 05:01:03 +0000 (0:00:00.141) 0:17:34.861 ******* 2026-03-18 05:01:22.984834 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:01:22.984846 | orchestrator | 2026-03-18 05:01:22.984857 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-18 05:01:22.984868 | orchestrator | Wednesday 18 March 2026 05:01:03 +0000 (0:00:00.230) 0:17:35.091 ******* 2026-03-18 05:01:22.984879 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-03-18 05:01:22.984890 | orchestrator | 2026-03-18 05:01:22.984901 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-18 05:01:22.984912 | orchestrator | Wednesday 18 March 2026 05:01:07 +0000 (0:00:03.548) 0:17:38.639 ******* 2026-03-18 05:01:22.984923 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-18 05:01:22.984935 | orchestrator | 2026-03-18 05:01:22.984947 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-18 05:01:22.984958 | orchestrator | Wednesday 18 March 2026 05:01:07 +0000 (0:00:00.194) 0:17:38.834 ******* 2026-03-18 05:01:22.985038 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-18 05:01:22.985058 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-18 05:01:22.985071 | orchestrator | 2026-03-18 05:01:22.985107 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-18 05:01:22.985120 | orchestrator | Wednesday 18 March 2026 05:01:13 +0000 (0:00:06.691) 0:17:45.525 ******* 2026-03-18 05:01:22.985133 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:22.985146 | orchestrator | 2026-03-18 05:01:22.985159 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-18 05:01:22.985172 | orchestrator | Wednesday 18 March 2026 05:01:14 +0000 (0:00:00.143) 0:17:45.668 ******* 2026-03-18 05:01:22.985184 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:22.985196 | orchestrator | 2026-03-18 05:01:22.985208 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 05:01:22.985221 | orchestrator | Wednesday 18 March 2026 05:01:14 +0000 (0:00:00.153) 0:17:45.821 ******* 2026-03-18 05:01:22.985233 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:22.985245 | orchestrator | 2026-03-18 05:01:22.985258 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-03-18 05:01:22.985270 | orchestrator | Wednesday 18 March 2026 05:01:14 +0000 (0:00:00.172) 0:17:45.993 ******* 2026-03-18 05:01:22.985282 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:22.985295 | orchestrator | 2026-03-18 05:01:22.985307 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 05:01:22.985320 | orchestrator | Wednesday 18 March 2026 05:01:14 +0000 (0:00:00.158) 0:17:46.152 ******* 2026-03-18 05:01:22.985332 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:22.985345 | orchestrator | 2026-03-18 05:01:22.985357 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 05:01:22.985369 | orchestrator | Wednesday 18 March 2026 05:01:15 +0000 (0:00:00.488) 0:17:46.641 ******* 2026-03-18 05:01:22.985382 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:01:22.985395 | orchestrator | 2026-03-18 05:01:22.985407 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 05:01:22.985417 | orchestrator | Wednesday 18 March 2026 05:01:15 +0000 (0:00:00.283) 0:17:46.924 ******* 2026-03-18 05:01:22.985428 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-18 05:01:22.985439 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-18 05:01:22.985450 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-18 05:01:22.985461 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:22.985472 | orchestrator | 2026-03-18 05:01:22.985483 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 05:01:22.985510 | orchestrator | Wednesday 18 March 2026 05:01:15 +0000 (0:00:00.501) 0:17:47.426 ******* 2026-03-18 05:01:22.985522 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-18 05:01:22.985533 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-18 05:01:22.985544 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-18 05:01:22.985555 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:22.985566 | orchestrator | 2026-03-18 05:01:22.985576 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 05:01:22.985587 | orchestrator | Wednesday 18 March 2026 05:01:16 +0000 (0:00:00.436) 0:17:47.863 ******* 2026-03-18 05:01:22.985598 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-18 05:01:22.985609 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-18 05:01:22.985620 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-18 05:01:22.985631 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:22.985641 | orchestrator | 2026-03-18 05:01:22.985652 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 05:01:22.985663 | orchestrator | Wednesday 18 March 2026 05:01:16 +0000 (0:00:00.446) 0:17:48.309 ******* 2026-03-18 05:01:22.985675 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:01:22.985686 | orchestrator | 2026-03-18 05:01:22.985697 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 05:01:22.985716 | orchestrator | Wednesday 18 March 2026 05:01:16 +0000 (0:00:00.188) 0:17:48.498 ******* 2026-03-18 05:01:22.985728 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-18 05:01:22.985738 | orchestrator | 2026-03-18 05:01:22.985749 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-18 05:01:22.985760 | orchestrator | Wednesday 18 March 2026 05:01:17 +0000 (0:00:00.467) 0:17:48.966 ******* 2026-03-18 05:01:22.985771 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:01:22.985782 | orchestrator | 
2026-03-18 05:01:22.985793 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-18 05:01:22.985804 | orchestrator | Wednesday 18 March 2026 05:01:18 +0000 (0:00:00.842) 0:17:49.809 ******* 2026-03-18 05:01:22.985814 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:01:22.985825 | orchestrator | 2026-03-18 05:01:22.985836 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-18 05:01:22.985854 | orchestrator | Wednesday 18 March 2026 05:01:18 +0000 (0:00:00.152) 0:17:49.961 ******* 2026-03-18 05:01:22.985872 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 05:01:22.985890 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:01:22.985910 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:01:22.985930 | orchestrator | 2026-03-18 05:01:22.985957 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-18 05:01:22.985969 | orchestrator | Wednesday 18 March 2026 05:01:19 +0000 (0:00:01.022) 0:17:50.984 ******* 2026-03-18 05:01:22.986097 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-03-18 05:01:22.986111 | orchestrator | 2026-03-18 05:01:22.986121 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-18 05:01:22.986132 | orchestrator | Wednesday 18 March 2026 05:01:19 +0000 (0:00:00.518) 0:17:51.502 ******* 2026-03-18 05:01:22.986153 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:22.986164 | orchestrator | 2026-03-18 05:01:22.986174 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-18 05:01:22.986185 | orchestrator | Wednesday 18 March 2026 05:01:20 +0000 (0:00:00.134) 
0:17:51.636 ******* 2026-03-18 05:01:22.986196 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:22.986207 | orchestrator | 2026-03-18 05:01:22.986218 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-18 05:01:22.986228 | orchestrator | Wednesday 18 March 2026 05:01:20 +0000 (0:00:00.142) 0:17:51.779 ******* 2026-03-18 05:01:22.986239 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:01:22.986250 | orchestrator | 2026-03-18 05:01:22.986261 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-18 05:01:22.986271 | orchestrator | Wednesday 18 March 2026 05:01:20 +0000 (0:00:00.456) 0:17:52.236 ******* 2026-03-18 05:01:22.986282 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:01:22.986293 | orchestrator | 2026-03-18 05:01:22.986303 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-18 05:01:22.986314 | orchestrator | Wednesday 18 March 2026 05:01:20 +0000 (0:00:00.168) 0:17:52.404 ******* 2026-03-18 05:01:22.986324 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-18 05:01:22.986335 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-18 05:01:22.986346 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-18 05:01:22.986357 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-18 05:01:22.986368 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-18 05:01:22.986378 | orchestrator | 2026-03-18 05:01:22.986389 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-18 05:01:22.986399 | orchestrator | Wednesday 18 March 2026 05:01:22 +0000 (0:00:01.851) 0:17:54.256 ******* 2026-03-18 
05:01:22.986420 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:01:22.986431 | orchestrator | 2026-03-18 05:01:22.986442 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-18 05:01:22.986453 | orchestrator | Wednesday 18 March 2026 05:01:22 +0000 (0:00:00.132) 0:17:54.388 ******* 2026-03-18 05:01:22.986464 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-03-18 05:01:22.986474 | orchestrator | 2026-03-18 05:01:22.986485 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-18 05:02:05.406511 | orchestrator | Wednesday 18 March 2026 05:01:22 +0000 (0:00:00.199) 0:17:54.588 ******* 2026-03-18 05:02:05.406628 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-18 05:02:05.406645 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-18 05:02:05.406658 | orchestrator | 2026-03-18 05:02:05.406671 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-18 05:02:05.406682 | orchestrator | Wednesday 18 March 2026 05:01:23 +0000 (0:00:00.833) 0:17:55.421 ******* 2026-03-18 05:02:05.406693 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 05:02:05.406705 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-18 05:02:05.406716 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-18 05:02:05.406727 | orchestrator | 2026-03-18 05:02:05.406738 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-18 05:02:05.406749 | orchestrator | Wednesday 18 March 2026 05:01:26 +0000 (0:00:02.202) 0:17:57.624 ******* 2026-03-18 05:02:05.406760 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-18 05:02:05.406771 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-18 
05:02:05.406783 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:02:05.406794 | orchestrator | 2026-03-18 05:02:05.406805 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-18 05:02:05.406815 | orchestrator | Wednesday 18 March 2026 05:01:27 +0000 (0:00:00.998) 0:17:58.622 ******* 2026-03-18 05:02:05.406826 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:02:05.406837 | orchestrator | 2026-03-18 05:02:05.406848 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-18 05:02:05.406859 | orchestrator | Wednesday 18 March 2026 05:01:27 +0000 (0:00:00.227) 0:17:58.850 ******* 2026-03-18 05:02:05.406870 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:02:05.406880 | orchestrator | 2026-03-18 05:02:05.406891 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-18 05:02:05.406902 | orchestrator | Wednesday 18 March 2026 05:01:27 +0000 (0:00:00.142) 0:17:58.992 ******* 2026-03-18 05:02:05.406913 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:02:05.406924 | orchestrator | 2026-03-18 05:02:05.406935 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-18 05:02:05.406946 | orchestrator | Wednesday 18 March 2026 05:01:27 +0000 (0:00:00.469) 0:17:59.462 ******* 2026-03-18 05:02:05.406957 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-03-18 05:02:05.406969 | orchestrator | 2026-03-18 05:02:05.406979 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-18 05:02:05.406990 | orchestrator | Wednesday 18 March 2026 05:01:28 +0000 (0:00:00.233) 0:17:59.696 ******* 2026-03-18 05:02:05.407001 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:02:05.407039 | orchestrator | 2026-03-18 05:02:05.407069 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-03-18 05:02:05.407083 | orchestrator | Wednesday 18 March 2026 05:01:28 +0000 (0:00:00.474) 0:18:00.171 ******* 2026-03-18 05:02:05.407096 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:02:05.407110 | orchestrator | 2026-03-18 05:02:05.407122 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-18 05:02:05.407135 | orchestrator | Wednesday 18 March 2026 05:01:30 +0000 (0:00:02.265) 0:18:02.437 ******* 2026-03-18 05:02:05.407173 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-03-18 05:02:05.407186 | orchestrator | 2026-03-18 05:02:05.407200 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-18 05:02:05.407213 | orchestrator | Wednesday 18 March 2026 05:01:31 +0000 (0:00:00.239) 0:18:02.677 ******* 2026-03-18 05:02:05.407226 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:02:05.407239 | orchestrator | 2026-03-18 05:02:05.407251 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-18 05:02:05.407265 | orchestrator | Wednesday 18 March 2026 05:01:32 +0000 (0:00:00.958) 0:18:03.635 ******* 2026-03-18 05:02:05.407277 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:02:05.407290 | orchestrator | 2026-03-18 05:02:05.407303 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-18 05:02:05.407315 | orchestrator | Wednesday 18 March 2026 05:01:32 +0000 (0:00:00.934) 0:18:04.570 ******* 2026-03-18 05:02:05.407328 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:02:05.407341 | orchestrator | 2026-03-18 05:02:05.407354 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-18 05:02:05.407366 | orchestrator | Wednesday 18 March 2026 05:01:34 +0000 (0:00:01.230) 0:18:05.801 ******* 2026-03-18 
05:02:05.407379 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:02:05.407393 | orchestrator | 2026-03-18 05:02:05.407407 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-18 05:02:05.407418 | orchestrator | Wednesday 18 March 2026 05:01:34 +0000 (0:00:00.169) 0:18:05.971 ******* 2026-03-18 05:02:05.407429 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:02:05.407439 | orchestrator | 2026-03-18 05:02:05.407450 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-18 05:02:05.407461 | orchestrator | Wednesday 18 March 2026 05:01:34 +0000 (0:00:00.155) 0:18:06.126 ******* 2026-03-18 05:02:05.407471 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-18 05:02:05.407482 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-18 05:02:05.407493 | orchestrator | 2026-03-18 05:02:05.407504 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-18 05:02:05.407515 | orchestrator | Wednesday 18 March 2026 05:01:35 +0000 (0:00:00.850) 0:18:06.976 ******* 2026-03-18 05:02:05.407525 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-18 05:02:05.407536 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-18 05:02:05.407547 | orchestrator | 2026-03-18 05:02:05.407557 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-18 05:02:05.407568 | orchestrator | Wednesday 18 March 2026 05:01:37 +0000 (0:00:02.524) 0:18:09.501 ******* 2026-03-18 05:02:05.407579 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-03-18 05:02:05.407607 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-18 05:02:05.407618 | orchestrator | 2026-03-18 05:02:05.407629 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-18 05:02:05.407640 | orchestrator | Wednesday 18 March 2026 05:01:41 +0000 (0:00:03.773) 
0:18:13.275 ******* 2026-03-18 05:02:05.407651 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:02:05.407661 | orchestrator | 2026-03-18 05:02:05.407672 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-18 05:02:05.407683 | orchestrator | Wednesday 18 March 2026 05:01:41 +0000 (0:00:00.252) 0:18:13.527 ******* 2026-03-18 05:02:05.407694 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:02:05.407704 | orchestrator | 2026-03-18 05:02:05.407715 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-18 05:02:05.407726 | orchestrator | Wednesday 18 March 2026 05:01:42 +0000 (0:00:00.247) 0:18:13.774 ******* 2026-03-18 05:02:05.407737 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:02:05.407747 | orchestrator | 2026-03-18 05:02:05.407758 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-18 05:02:05.407768 | orchestrator | Wednesday 18 March 2026 05:01:42 +0000 (0:00:00.321) 0:18:14.096 ******* 2026-03-18 05:02:05.407787 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:02:05.407798 | orchestrator | 2026-03-18 05:02:05.407809 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-18 05:02:05.407820 | orchestrator | Wednesday 18 March 2026 05:01:42 +0000 (0:00:00.143) 0:18:14.239 ******* 2026-03-18 05:02:05.407830 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:02:05.407841 | orchestrator | 2026-03-18 05:02:05.407852 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-03-18 05:02:05.407863 | orchestrator | Wednesday 18 March 2026 05:01:42 +0000 (0:00:00.167) 0:18:14.407 ******* 2026-03-18 05:02:05.407873 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-03-18 05:02:05.407885 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-03-18 05:02:05.407896 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-03-18 05:02:05.407907 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-03-18 05:02:05.407918 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (596 retries left). 2026-03-18 05:02:05.407929 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (595 retries left). 2026-03-18 05:02:05.407940 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-18 05:02:05.407950 | orchestrator | 2026-03-18 05:02:05.407967 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-03-18 05:02:05.407978 | orchestrator | 2026-03-18 05:02:05.407989 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 05:02:05.408000 | orchestrator | Wednesday 18 March 2026 05:02:02 +0000 (0:00:19.462) 0:18:33.869 ******* 2026-03-18 05:02:05.408057 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-18 05:02:05.408069 | orchestrator | 2026-03-18 05:02:05.408080 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-18 05:02:05.408091 | orchestrator | Wednesday 18 March 2026 05:02:02 +0000 (0:00:00.263) 0:18:34.132 ******* 2026-03-18 05:02:05.408102 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:05.408113 | orchestrator | 2026-03-18 05:02:05.408123 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-18 05:02:05.408134 | orchestrator | Wednesday 18 March 2026 05:02:02 +0000 
(0:00:00.466) 0:18:34.599 ******* 2026-03-18 05:02:05.408145 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:05.408156 | orchestrator | 2026-03-18 05:02:05.408167 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 05:02:05.408177 | orchestrator | Wednesday 18 March 2026 05:02:03 +0000 (0:00:00.456) 0:18:35.055 ******* 2026-03-18 05:02:05.408188 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:05.408199 | orchestrator | 2026-03-18 05:02:05.408209 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 05:02:05.408220 | orchestrator | Wednesday 18 March 2026 05:02:03 +0000 (0:00:00.454) 0:18:35.509 ******* 2026-03-18 05:02:05.408231 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:05.408241 | orchestrator | 2026-03-18 05:02:05.408253 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-18 05:02:05.408273 | orchestrator | Wednesday 18 March 2026 05:02:04 +0000 (0:00:00.155) 0:18:35.665 ******* 2026-03-18 05:02:05.408293 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:05.408312 | orchestrator | 2026-03-18 05:02:05.408331 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-18 05:02:05.408350 | orchestrator | Wednesday 18 March 2026 05:02:04 +0000 (0:00:00.145) 0:18:35.810 ******* 2026-03-18 05:02:05.408369 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:05.408388 | orchestrator | 2026-03-18 05:02:05.408408 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-18 05:02:05.408440 | orchestrator | Wednesday 18 March 2026 05:02:04 +0000 (0:00:00.171) 0:18:35.982 ******* 2026-03-18 05:02:05.408459 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:02:05.408478 | orchestrator | 2026-03-18 05:02:05.408496 | orchestrator | TASK [ceph-facts : Set_fact ceph_release 
ceph_stable_release] ****************** 2026-03-18 05:02:05.408508 | orchestrator | Wednesday 18 March 2026 05:02:04 +0000 (0:00:00.152) 0:18:36.135 ******* 2026-03-18 05:02:05.408518 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:05.408529 | orchestrator | 2026-03-18 05:02:05.408540 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-18 05:02:05.408551 | orchestrator | Wednesday 18 March 2026 05:02:04 +0000 (0:00:00.142) 0:18:36.277 ******* 2026-03-18 05:02:05.408562 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 05:02:05.408583 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:02:12.876877 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:02:12.876990 | orchestrator | 2026-03-18 05:02:12.877006 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-18 05:02:12.877087 | orchestrator | Wednesday 18 March 2026 05:02:05 +0000 (0:00:00.734) 0:18:37.012 ******* 2026-03-18 05:02:12.877100 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:12.877112 | orchestrator | 2026-03-18 05:02:12.877124 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-18 05:02:12.877135 | orchestrator | Wednesday 18 March 2026 05:02:05 +0000 (0:00:00.263) 0:18:37.275 ******* 2026-03-18 05:02:12.877146 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 05:02:12.877158 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:02:12.877168 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:02:12.877179 | orchestrator | 2026-03-18 05:02:12.877191 | orchestrator | TASK [ceph-facts : Check for 
a ceph mon socket] ******************************** 2026-03-18 05:02:12.877202 | orchestrator | Wednesday 18 March 2026 05:02:07 +0000 (0:00:02.214) 0:18:39.490 ******* 2026-03-18 05:02:12.877213 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-18 05:02:12.877225 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-18 05:02:12.877236 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-18 05:02:12.877247 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:02:12.877258 | orchestrator | 2026-03-18 05:02:12.877270 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-18 05:02:12.877281 | orchestrator | Wednesday 18 March 2026 05:02:08 +0000 (0:00:00.420) 0:18:39.910 ******* 2026-03-18 05:02:12.877294 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-18 05:02:12.877309 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-18 05:02:12.877336 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-18 05:02:12.877348 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:02:12.877359 | orchestrator | 2026-03-18 05:02:12.877371 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-18 05:02:12.877381 | orchestrator | Wednesday 18 March 2026 05:02:09 +0000 (0:00:01.041) 0:18:40.951 
******* 2026-03-18 05:02:12.877395 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:12.877430 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:12.877441 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:12.877453 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:02:12.877463 | orchestrator | 2026-03-18 05:02:12.877474 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-18 05:02:12.877485 | orchestrator | Wednesday 18 March 2026 05:02:09 +0000 (0:00:00.175) 0:18:41.127 ******* 2026-03-18 05:02:12.877517 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'f231ed715636', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 05:02:06.215416', 'end': 
'2026-03-18 05:02:06.267772', 'delta': '0:00:00.052356', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f231ed715636'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-18 05:02:12.877533 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'c6b616adb9bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 05:02:06.785497', 'end': '2026-03-18 05:02:06.841506', 'delta': '0:00:00.056009', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c6b616adb9bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-18 05:02:12.877544 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '38d5679b5612', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 05:02:07.663884', 'end': '2026-03-18 05:02:07.706351', 'delta': '0:00:00.042467', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['38d5679b5612'], 
'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-18 05:02:12.877556 | orchestrator | 2026-03-18 05:02:12.877572 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-18 05:02:12.877591 | orchestrator | Wednesday 18 March 2026 05:02:10 +0000 (0:00:00.525) 0:18:41.653 ******* 2026-03-18 05:02:12.877602 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:12.877613 | orchestrator | 2026-03-18 05:02:12.877624 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-18 05:02:12.877635 | orchestrator | Wednesday 18 March 2026 05:02:10 +0000 (0:00:00.287) 0:18:41.940 ******* 2026-03-18 05:02:12.877646 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:02:12.877657 | orchestrator | 2026-03-18 05:02:12.877667 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-18 05:02:12.877678 | orchestrator | Wednesday 18 March 2026 05:02:10 +0000 (0:00:00.287) 0:18:42.228 ******* 2026-03-18 05:02:12.877689 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:12.877700 | orchestrator | 2026-03-18 05:02:12.877710 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-18 05:02:12.877721 | orchestrator | Wednesday 18 March 2026 05:02:10 +0000 (0:00:00.147) 0:18:42.375 ******* 2026-03-18 05:02:12.877731 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-18 05:02:12.877742 | orchestrator | 2026-03-18 05:02:12.877753 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 05:02:12.877764 | orchestrator | Wednesday 18 March 2026 05:02:11 +0000 (0:00:00.975) 0:18:43.351 ******* 2026-03-18 05:02:12.877774 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:12.877785 | orchestrator | 2026-03-18 05:02:12.877796 | 
orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-18 05:02:12.877806 | orchestrator | Wednesday 18 March 2026 05:02:11 +0000 (0:00:00.156) 0:18:43.507 ******* 2026-03-18 05:02:12.877817 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:02:12.877828 | orchestrator | 2026-03-18 05:02:12.877839 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-18 05:02:12.877850 | orchestrator | Wednesday 18 March 2026 05:02:12 +0000 (0:00:00.135) 0:18:43.643 ******* 2026-03-18 05:02:12.877860 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:02:12.877871 | orchestrator | 2026-03-18 05:02:12.877881 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 05:02:12.877892 | orchestrator | Wednesday 18 March 2026 05:02:12 +0000 (0:00:00.251) 0:18:43.895 ******* 2026-03-18 05:02:12.877903 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:02:12.877914 | orchestrator | 2026-03-18 05:02:12.877924 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-18 05:02:12.877935 | orchestrator | Wednesday 18 March 2026 05:02:12 +0000 (0:00:00.153) 0:18:44.048 ******* 2026-03-18 05:02:12.877946 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:02:12.877956 | orchestrator | 2026-03-18 05:02:12.877967 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-18 05:02:12.877978 | orchestrator | Wednesday 18 March 2026 05:02:12 +0000 (0:00:00.131) 0:18:44.180 ******* 2026-03-18 05:02:12.877989 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:12.878000 | orchestrator | 2026-03-18 05:02:12.878095 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-18 05:02:12.878112 | orchestrator | Wednesday 18 March 2026 05:02:12 +0000 (0:00:00.170) 0:18:44.350 ******* 
2026-03-18 05:02:12.878124 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:02:12.878135 | orchestrator | 2026-03-18 05:02:12.878154 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-18 05:02:13.955186 | orchestrator | Wednesday 18 March 2026 05:02:12 +0000 (0:00:00.135) 0:18:44.486 ******* 2026-03-18 05:02:13.955287 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:13.955302 | orchestrator | 2026-03-18 05:02:13.955315 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-18 05:02:13.955326 | orchestrator | Wednesday 18 March 2026 05:02:13 +0000 (0:00:00.187) 0:18:44.674 ******* 2026-03-18 05:02:13.955337 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:02:13.955349 | orchestrator | 2026-03-18 05:02:13.955360 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-18 05:02:13.955396 | orchestrator | Wednesday 18 March 2026 05:02:13 +0000 (0:00:00.443) 0:18:45.117 ******* 2026-03-18 05:02:13.955407 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:13.955418 | orchestrator | 2026-03-18 05:02:13.955428 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-18 05:02:13.955440 | orchestrator | Wednesday 18 March 2026 05:02:13 +0000 (0:00:00.188) 0:18:45.305 ******* 2026-03-18 05:02:13.955453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:02:13.955469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 
'links': {'ids': ['dm-name-ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f', 'dm-uuid-LVM-IyJ409WPQ2Ewwg643e4T8GcTWsVLXvc4PfxdfcUZHCmpn1f575ZO5FoE28c03VdS'], 'uuids': ['0c1ae19d-2c32-4e94-8f09-c34bb952e967'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '54344bae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS']}})  2026-03-18 05:02:13.955499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216', 'scsi-SQEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '343cfa22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-18 05:02:13.955513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-wEEZ4B-D8dq-p1QG-iT9B-teZl-6bRA-4Rtw7V', 'scsi-0QEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00', 'scsi-SQEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '92bad715', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea']}})  2026-03-18 05:02:13.955526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:02:13.955555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:02:13.955568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-42-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 05:02:13.955589 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:02:13.955601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw', 'dm-uuid-CRYPT-LUKS2-b658b175f7d84bc1a9acacbdfc2fb3a4-T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-18 05:02:13.955617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:02:13.955629 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea', 'dm-uuid-LVM-datDZvt3H0VWDhIXtfyG2nxxdM9DebWAT9QYVvDcd9eNFRbEejIJhI9dObKuqGRw'], 'uuids': ['b658b175-f7d8-4bc1-a9ac-acbdfc2fb3a4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '92bad715', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw']}})  2026-03-18 05:02:13.955641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-yCkM9t-1XKI-b30Y-UmhR-lcOf-KBlN-LK1ss0', 'scsi-0QEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568', 'scsi-SQEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '54344bae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f']}})  2026-03-18 05:02:13.955652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:02:13.955690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '15119f5e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-18 05:02:14.311132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:02:14.311213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:02:14.311225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS', 'dm-uuid-CRYPT-LUKS2-0c1ae19d2c324e948f09c34bb952e967-Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-18 05:02:14.311235 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:02:14.311244 | orchestrator | 2026-03-18 05:02:14.311252 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-18 05:02:14.311259 | orchestrator | Wednesday 18 March 2026 05:02:14 +0000 (0:00:00.392) 0:18:45.698 ******* 2026-03-18 05:02:14.311267 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:14.311296 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f', 'dm-uuid-LVM-IyJ409WPQ2Ewwg643e4T8GcTWsVLXvc4PfxdfcUZHCmpn1f575ZO5FoE28c03VdS'], 'uuids': ['0c1ae19d-2c32-4e94-8f09-c34bb952e967'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '54344bae', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:14.311317 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216', 'scsi-SQEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '343cfa22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:14.311340 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-wEEZ4B-D8dq-p1QG-iT9B-teZl-6bRA-4Rtw7V', 'scsi-0QEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00', 'scsi-SQEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '92bad715', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:14.311350 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:14.311358 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:14.311370 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:14.311378 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:14.311393 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw', 'dm-uuid-CRYPT-LUKS2-b658b175f7d84bc1a9acacbdfc2fb3a4-T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:15.697334 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:15.697449 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea', 'dm-uuid-LVM-datDZvt3H0VWDhIXtfyG2nxxdM9DebWAT9QYVvDcd9eNFRbEejIJhI9dObKuqGRw'], 'uuids': ['b658b175-f7d8-4bc1-a9ac-acbdfc2fb3a4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '92bad715', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:15.697499 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-yCkM9t-1XKI-b30Y-UmhR-lcOf-KBlN-LK1ss0', 'scsi-0QEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568', 'scsi-SQEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '54344bae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:15.697516 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:15.697568 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '15119f5e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:15.697621 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:15.697636 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:15.697649 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS', 'dm-uuid-CRYPT-LUKS2-0c1ae19d2c324e948f09c34bb952e967-Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:02:15.697662 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:02:15.697676 | orchestrator | 2026-03-18 05:02:15.697688 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-18 05:02:15.697701 | orchestrator | Wednesday 18 March 2026 05:02:14 +0000 (0:00:00.401) 0:18:46.100 ******* 2026-03-18 05:02:15.697712 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:15.697724 | orchestrator | 2026-03-18 05:02:15.697736 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-18 05:02:15.697746 | orchestrator | Wednesday 18 March 2026 05:02:14 +0000 (0:00:00.505) 0:18:46.605 ******* 2026-03-18 05:02:15.697757 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:15.697768 | orchestrator | 2026-03-18 05:02:15.697785 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 05:02:15.697796 | orchestrator | Wednesday 18 March 2026 05:02:15 +0000 (0:00:00.196) 0:18:46.801 ******* 2026-03-18 05:02:15.697807 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:02:15.697818 | orchestrator | 2026-03-18 05:02:15.697829 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 05:02:15.697852 | orchestrator | Wednesday 18 March 2026 05:02:15 +0000 (0:00:00.504) 0:18:47.306 ******* 2026-03-18 05:02:30.773269 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:02:30.773403 | orchestrator | 2026-03-18 05:02:30.773422 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 05:02:30.773435 | orchestrator | Wednesday 18 March 2026 05:02:15 +0000 (0:00:00.131) 0:18:47.437 ******* 2026-03-18 05:02:30.773543 | orchestrator | skipping: [testbed-node-5] 2026-03-18 
05:02:30.773559 | orchestrator | 2026-03-18 05:02:30.773571 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 05:02:30.773582 | orchestrator | Wednesday 18 March 2026 05:02:16 +0000 (0:00:00.266) 0:18:47.703 ******* 2026-03-18 05:02:30.773622 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:02:30.773639 | orchestrator | 2026-03-18 05:02:30.773660 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-18 05:02:30.773678 | orchestrator | Wednesday 18 March 2026 05:02:16 +0000 (0:00:00.149) 0:18:47.853 ******* 2026-03-18 05:02:30.773697 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-18 05:02:30.773715 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-18 05:02:30.773733 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-18 05:02:30.773749 | orchestrator | 2026-03-18 05:02:30.773768 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-18 05:02:30.773788 | orchestrator | Wednesday 18 March 2026 05:02:17 +0000 (0:00:00.990) 0:18:48.843 ******* 2026-03-18 05:02:30.773808 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-18 05:02:30.773829 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-18 05:02:30.773849 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-18 05:02:30.773869 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:02:30.773888 | orchestrator | 2026-03-18 05:02:30.773909 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-18 05:02:30.773931 | orchestrator | Wednesday 18 March 2026 05:02:17 +0000 (0:00:00.170) 0:18:49.013 ******* 2026-03-18 05:02:30.773951 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-03-18 05:02:30.773966 | 
orchestrator |
2026-03-18 05:02:30.773980 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-18 05:02:30.773994 | orchestrator | Wednesday 18 March 2026 05:02:17 +0000 (0:00:00.527) 0:18:49.540 *******
2026-03-18 05:02:30.774007 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:30.774104 | orchestrator |
2026-03-18 05:02:30.774120 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-18 05:02:30.774131 | orchestrator | Wednesday 18 March 2026 05:02:18 +0000 (0:00:00.158) 0:18:49.699 *******
2026-03-18 05:02:30.774142 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:30.774153 | orchestrator |
2026-03-18 05:02:30.774164 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-18 05:02:30.774175 | orchestrator | Wednesday 18 March 2026 05:02:18 +0000 (0:00:00.188) 0:18:49.887 *******
2026-03-18 05:02:30.774185 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:30.774196 | orchestrator |
2026-03-18 05:02:30.774207 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-18 05:02:30.774218 | orchestrator | Wednesday 18 March 2026 05:02:18 +0000 (0:00:00.149) 0:18:50.036 *******
2026-03-18 05:02:30.774229 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:30.774240 | orchestrator |
2026-03-18 05:02:30.774251 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-18 05:02:30.774261 | orchestrator | Wednesday 18 March 2026 05:02:18 +0000 (0:00:00.266) 0:18:50.303 *******
2026-03-18 05:02:30.774272 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-18 05:02:30.774283 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-18 05:02:30.774294 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-18 05:02:30.774305 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:30.774315 | orchestrator |
2026-03-18 05:02:30.774326 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-18 05:02:30.774337 | orchestrator | Wednesday 18 March 2026 05:02:19 +0000 (0:00:00.431) 0:18:50.735 *******
2026-03-18 05:02:30.774348 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-18 05:02:30.774358 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-18 05:02:30.774369 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-18 05:02:30.774380 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:30.774396 | orchestrator |
2026-03-18 05:02:30.774432 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-18 05:02:30.774449 | orchestrator | Wednesday 18 March 2026 05:02:19 +0000 (0:00:00.416) 0:18:51.152 *******
2026-03-18 05:02:30.774466 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-18 05:02:30.774477 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-18 05:02:30.774489 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-18 05:02:30.774499 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:30.774510 | orchestrator |
2026-03-18 05:02:30.774521 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-18 05:02:30.774532 | orchestrator | Wednesday 18 March 2026 05:02:19 +0000 (0:00:00.395) 0:18:51.547 *******
2026-03-18 05:02:30.774542 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:30.774553 | orchestrator |
2026-03-18 05:02:30.774564 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-18 05:02:30.774590 | orchestrator | Wednesday 18 March 2026 05:02:20 +0000 (0:00:00.171) 0:18:51.719 *******
2026-03-18 05:02:30.774601 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-18 05:02:30.774611 | orchestrator |
2026-03-18 05:02:30.774622 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-18 05:02:30.774633 | orchestrator | Wednesday 18 March 2026 05:02:20 +0000 (0:00:00.340) 0:18:52.060 *******
2026-03-18 05:02:30.774667 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 05:02:30.774678 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 05:02:30.774689 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 05:02:30.774700 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-18 05:02:30.774710 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-18 05:02:30.774721 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-18 05:02:30.774732 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 05:02:30.774743 | orchestrator |
2026-03-18 05:02:30.774753 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-18 05:02:30.774764 | orchestrator | Wednesday 18 March 2026 05:02:21 +0000 (0:00:01.254) 0:18:53.314 *******
2026-03-18 05:02:30.774775 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 05:02:30.774785 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 05:02:30.774796 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 05:02:30.774806 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-18 05:02:30.774817 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-18 05:02:30.774827 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-18 05:02:30.774838 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 05:02:30.774849 | orchestrator |
2026-03-18 05:02:30.774859 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-03-18 05:02:30.774870 | orchestrator | Wednesday 18 March 2026 05:02:23 +0000 (0:00:01.711) 0:18:55.025 *******
2026-03-18 05:02:30.774881 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:30.774892 | orchestrator |
2026-03-18 05:02:30.774902 | orchestrator | TASK [Set num_osds] ************************************************************
2026-03-18 05:02:30.774913 | orchestrator | Wednesday 18 March 2026 05:02:24 +0000 (0:00:00.803) 0:18:55.829 *******
2026-03-18 05:02:30.774924 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:30.774935 | orchestrator |
2026-03-18 05:02:30.774945 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-03-18 05:02:30.774956 | orchestrator | Wednesday 18 March 2026 05:02:24 +0000 (0:00:00.155) 0:18:55.985 *******
2026-03-18 05:02:30.774974 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:30.774985 | orchestrator |
2026-03-18 05:02:30.774995 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-03-18 05:02:30.775006 | orchestrator | Wednesday 18 March 2026 05:02:24 +0000 (0:00:00.242) 0:18:56.228 *******
2026-03-18 05:02:30.775017 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-18 05:02:30.775056 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-03-18 05:02:30.775068 | orchestrator |
2026-03-18 05:02:30.775079 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-18 05:02:30.775090 | orchestrator | Wednesday 18 March 2026 05:02:27 +0000 (0:00:03.226) 0:18:59.454 *******
2026-03-18 05:02:30.775101 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-03-18 05:02:30.775112 | orchestrator |
2026-03-18 05:02:30.775123 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-18 05:02:30.775133 | orchestrator | Wednesday 18 March 2026 05:02:28 +0000 (0:00:00.219) 0:18:59.674 *******
2026-03-18 05:02:30.775144 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-03-18 05:02:30.775155 | orchestrator |
2026-03-18 05:02:30.775166 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-18 05:02:30.775176 | orchestrator | Wednesday 18 March 2026 05:02:28 +0000 (0:00:00.212) 0:18:59.887 *******
2026-03-18 05:02:30.775187 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:30.775198 | orchestrator |
2026-03-18 05:02:30.775208 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-18 05:02:30.775219 | orchestrator | Wednesday 18 March 2026 05:02:28 +0000 (0:00:00.131) 0:19:00.018 *******
2026-03-18 05:02:30.775230 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:30.775241 | orchestrator |
2026-03-18 05:02:30.775251 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-18 05:02:30.775262 | orchestrator | Wednesday 18 March 2026 05:02:28 +0000 (0:00:00.516) 0:19:00.535 *******
2026-03-18 05:02:30.775273 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:30.775284 | orchestrator |
2026-03-18 05:02:30.775294 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-18 05:02:30.775305 | orchestrator | Wednesday 18 March 2026 05:02:29 +0000 (0:00:00.559) 0:19:01.094 *******
2026-03-18 05:02:30.775316 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:30.775326 | orchestrator |
2026-03-18 05:02:30.775337 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-18 05:02:30.775348 | orchestrator | Wednesday 18 March 2026 05:02:29 +0000 (0:00:00.521) 0:19:01.616 *******
2026-03-18 05:02:30.775358 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:30.775369 | orchestrator |
2026-03-18 05:02:30.775380 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-18 05:02:30.775395 | orchestrator | Wednesday 18 March 2026 05:02:30 +0000 (0:00:00.139) 0:19:01.756 *******
2026-03-18 05:02:30.775420 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:30.775440 | orchestrator |
2026-03-18 05:02:30.775458 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-18 05:02:30.775477 | orchestrator | Wednesday 18 March 2026 05:02:30 +0000 (0:00:00.479) 0:19:02.235 *******
2026-03-18 05:02:30.775490 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:30.775500 | orchestrator |
2026-03-18 05:02:30.775519 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-18 05:02:42.229904 | orchestrator | Wednesday 18 March 2026 05:02:30 +0000 (0:00:00.139) 0:19:02.375 *******
2026-03-18 05:02:42.230008 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:42.230106 | orchestrator |
2026-03-18 05:02:42.230118 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-18 05:02:42.230127 | orchestrator | Wednesday 18 March 2026 05:02:31 +0000 (0:00:00.589) 0:19:02.965 *******
2026-03-18 05:02:42.230136 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:42.230166 | orchestrator |
2026-03-18 05:02:42.230176 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-18 05:02:42.230185 | orchestrator | Wednesday 18 March 2026 05:02:31 +0000 (0:00:00.522) 0:19:03.487 *******
2026-03-18 05:02:42.230194 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.230203 | orchestrator |
2026-03-18 05:02:42.230212 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-18 05:02:42.230221 | orchestrator | Wednesday 18 March 2026 05:02:31 +0000 (0:00:00.130) 0:19:03.618 *******
2026-03-18 05:02:42.230229 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.230238 | orchestrator |
2026-03-18 05:02:42.230247 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-18 05:02:42.230255 | orchestrator | Wednesday 18 March 2026 05:02:32 +0000 (0:00:00.140) 0:19:03.759 *******
2026-03-18 05:02:42.230264 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:42.230273 | orchestrator |
2026-03-18 05:02:42.230281 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-18 05:02:42.230290 | orchestrator | Wednesday 18 March 2026 05:02:32 +0000 (0:00:00.174) 0:19:03.933 *******
2026-03-18 05:02:42.230299 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:42.230307 | orchestrator |
2026-03-18 05:02:42.230316 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-18 05:02:42.230325 | orchestrator | Wednesday 18 March 2026 05:02:32 +0000 (0:00:00.184) 0:19:04.118 *******
2026-03-18 05:02:42.230333 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:42.230342 | orchestrator |
2026-03-18 05:02:42.230350 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-18 05:02:42.230359 | orchestrator | Wednesday 18 March 2026 05:02:32 +0000 (0:00:00.150) 0:19:04.269 *******
2026-03-18 05:02:42.230368 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.230376 | orchestrator |
2026-03-18 05:02:42.230386 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-18 05:02:42.230395 | orchestrator | Wednesday 18 March 2026 05:02:32 +0000 (0:00:00.158) 0:19:04.427 *******
2026-03-18 05:02:42.230403 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.230412 | orchestrator |
2026-03-18 05:02:42.230420 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-18 05:02:42.230429 | orchestrator | Wednesday 18 March 2026 05:02:32 +0000 (0:00:00.139) 0:19:04.566 *******
2026-03-18 05:02:42.230438 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.230446 | orchestrator |
2026-03-18 05:02:42.230456 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-18 05:02:42.230467 | orchestrator | Wednesday 18 March 2026 05:02:33 +0000 (0:00:00.137) 0:19:04.704 *******
2026-03-18 05:02:42.230477 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:42.230486 | orchestrator |
2026-03-18 05:02:42.230497 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-18 05:02:42.230508 | orchestrator | Wednesday 18 March 2026 05:02:33 +0000 (0:00:00.164) 0:19:04.868 *******
2026-03-18 05:02:42.230518 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:42.230528 | orchestrator |
2026-03-18 05:02:42.230539 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-18 05:02:42.230549 | orchestrator | Wednesday 18 March 2026 05:02:33 +0000 (0:00:00.531) 0:19:05.399 *******
2026-03-18 05:02:42.230559 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.230569 | orchestrator |
2026-03-18 05:02:42.230580 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-18 05:02:42.230590 | orchestrator | Wednesday 18 March 2026 05:02:33 +0000 (0:00:00.152) 0:19:05.552 *******
2026-03-18 05:02:42.230601 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.230610 | orchestrator |
2026-03-18 05:02:42.230620 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-18 05:02:42.230631 | orchestrator | Wednesday 18 March 2026 05:02:34 +0000 (0:00:00.143) 0:19:05.695 *******
2026-03-18 05:02:42.230640 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.230657 | orchestrator |
2026-03-18 05:02:42.230668 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-18 05:02:42.230678 | orchestrator | Wednesday 18 March 2026 05:02:34 +0000 (0:00:00.148) 0:19:05.844 *******
2026-03-18 05:02:42.230688 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.230697 | orchestrator |
2026-03-18 05:02:42.230707 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-18 05:02:42.230718 | orchestrator | Wednesday 18 March 2026 05:02:34 +0000 (0:00:00.133) 0:19:05.977 *******
2026-03-18 05:02:42.230728 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.230738 | orchestrator |
2026-03-18 05:02:42.230748 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-18 05:02:42.230757 | orchestrator | Wednesday 18 March 2026 05:02:34 +0000 (0:00:00.137) 0:19:06.119 *******
2026-03-18 05:02:42.230767 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.230778 | orchestrator |
2026-03-18 05:02:42.230787 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-18 05:02:42.230797 | orchestrator | Wednesday 18 March 2026 05:02:34 +0000 (0:00:00.135) 0:19:06.258 *******
2026-03-18 05:02:42.230808 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.230818 | orchestrator |
2026-03-18 05:02:42.230842 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-18 05:02:42.230852 | orchestrator | Wednesday 18 March 2026 05:02:34 +0000 (0:00:00.135) 0:19:06.393 *******
2026-03-18 05:02:42.230861 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.230869 | orchestrator |
2026-03-18 05:02:42.230878 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-18 05:02:42.230887 | orchestrator | Wednesday 18 March 2026 05:02:34 +0000 (0:00:00.164) 0:19:06.558 *******
2026-03-18 05:02:42.230917 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.230936 | orchestrator |
2026-03-18 05:02:42.230950 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-18 05:02:42.230959 | orchestrator | Wednesday 18 March 2026 05:02:35 +0000 (0:00:00.154) 0:19:06.713 *******
2026-03-18 05:02:42.230967 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.230976 | orchestrator |
2026-03-18 05:02:42.230984 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-18 05:02:42.230993 | orchestrator | Wednesday 18 March 2026 05:02:35 +0000 (0:00:00.149) 0:19:06.862 *******
2026-03-18 05:02:42.231001 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.231010 | orchestrator |
2026-03-18 05:02:42.231018 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-18 05:02:42.231027 | orchestrator | Wednesday 18 March 2026 05:02:35 +0000 (0:00:00.135) 0:19:06.998 *******
2026-03-18 05:02:42.231053 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.231062 | orchestrator |
2026-03-18 05:02:42.231071 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-18 05:02:42.231080 | orchestrator | Wednesday 18 March 2026 05:02:35 +0000 (0:00:00.534) 0:19:07.532 *******
2026-03-18 05:02:42.231088 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:42.231097 | orchestrator |
2026-03-18 05:02:42.231106 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-18 05:02:42.231114 | orchestrator | Wednesday 18 March 2026 05:02:36 +0000 (0:00:00.976) 0:19:08.509 *******
2026-03-18 05:02:42.231123 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:42.231131 | orchestrator |
2026-03-18 05:02:42.231140 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-18 05:02:42.231148 | orchestrator | Wednesday 18 March 2026 05:02:38 +0000 (0:00:01.215) 0:19:09.724 *******
2026-03-18 05:02:42.231157 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-03-18 05:02:42.231167 | orchestrator |
2026-03-18 05:02:42.231176 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-18 05:02:42.231184 | orchestrator | Wednesday 18 March 2026 05:02:38 +0000 (0:00:00.227) 0:19:09.952 *******
2026-03-18 05:02:42.231200 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.231208 | orchestrator |
2026-03-18 05:02:42.231217 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-18 05:02:42.231225 | orchestrator | Wednesday 18 March 2026 05:02:38 +0000 (0:00:00.171) 0:19:10.124 *******
2026-03-18 05:02:42.231234 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.231243 | orchestrator |
2026-03-18 05:02:42.231251 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-18 05:02:42.231260 | orchestrator | Wednesday 18 March 2026 05:02:38 +0000 (0:00:00.167) 0:19:10.292 *******
2026-03-18 05:02:42.231269 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-18 05:02:42.231278 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-18 05:02:42.231287 | orchestrator |
2026-03-18 05:02:42.231295 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-18 05:02:42.231304 | orchestrator | Wednesday 18 March 2026 05:02:39 +0000 (0:00:00.853) 0:19:11.146 *******
2026-03-18 05:02:42.231313 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:42.231321 | orchestrator |
2026-03-18 05:02:42.231330 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-18 05:02:42.231338 | orchestrator | Wednesday 18 March 2026 05:02:39 +0000 (0:00:00.458) 0:19:11.604 *******
2026-03-18 05:02:42.231347 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.231356 | orchestrator |
2026-03-18 05:02:42.231364 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-18 05:02:42.231373 | orchestrator | Wednesday 18 March 2026 05:02:40 +0000 (0:00:00.159) 0:19:11.763 *******
2026-03-18 05:02:42.231382 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.231390 | orchestrator |
2026-03-18 05:02:42.231399 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-18 05:02:42.231408 | orchestrator | Wednesday 18 March 2026 05:02:40 +0000 (0:00:00.164) 0:19:11.928 *******
2026-03-18 05:02:42.231416 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.231425 | orchestrator |
2026-03-18 05:02:42.231433 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-18 05:02:42.231442 | orchestrator | Wednesday 18 March 2026 05:02:40 +0000 (0:00:00.136) 0:19:12.065 *******
2026-03-18 05:02:42.231451 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-03-18 05:02:42.231459 | orchestrator |
2026-03-18 05:02:42.231468 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-18 05:02:42.231476 | orchestrator | Wednesday 18 March 2026 05:02:40 +0000 (0:00:00.551) 0:19:12.617 *******
2026-03-18 05:02:42.231485 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:02:42.231494 | orchestrator |
2026-03-18 05:02:42.231502 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-18 05:02:42.231511 | orchestrator | Wednesday 18 March 2026 05:02:41 +0000 (0:00:00.706) 0:19:13.323 *******
2026-03-18 05:02:42.231519 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-18 05:02:42.231528 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-18 05:02:42.231536 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-18 05:02:42.231545 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.231554 | orchestrator |
2026-03-18 05:02:42.231567 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-18 05:02:42.231576 | orchestrator | Wednesday 18 March 2026 05:02:41 +0000 (0:00:00.163) 0:19:13.486 *******
2026-03-18 05:02:42.231584 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:02:42.231593 | orchestrator |
2026-03-18 05:02:42.231601 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-18 05:02:42.231610 | orchestrator | Wednesday 18 March 2026 05:02:42 +0000 (0:00:00.154) 0:19:13.641 *******
2026-03-18 05:02:42.231631 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.558247 | orchestrator |
2026-03-18 05:03:00.558359 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-18 05:03:00.558373 | orchestrator | Wednesday 18 March 2026 05:02:42 +0000 (0:00:00.195) 0:19:13.836 *******
2026-03-18 05:03:00.558385 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.558396 | orchestrator |
2026-03-18 05:03:00.558407 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-18 05:03:00.558417 | orchestrator | Wednesday 18 March 2026 05:02:42 +0000 (0:00:00.184) 0:19:14.021 *******
2026-03-18 05:03:00.558427 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.558436 | orchestrator |
2026-03-18 05:03:00.558447 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-18 05:03:00.558457 | orchestrator | Wednesday 18 March 2026 05:02:42 +0000 (0:00:00.171) 0:19:14.192 *******
2026-03-18 05:03:00.558466 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.558476 | orchestrator |
2026-03-18 05:03:00.558486 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-18 05:03:00.558495 | orchestrator | Wednesday 18 March 2026 05:02:42 +0000 (0:00:00.159) 0:19:14.352 *******
2026-03-18 05:03:00.558505 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:03:00.558516 | orchestrator |
2026-03-18 05:03:00.558526 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-18 05:03:00.558536 | orchestrator | Wednesday 18 March 2026 05:02:44 +0000 (0:00:01.522) 0:19:15.874 *******
2026-03-18 05:03:00.558546 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:03:00.558556 | orchestrator |
2026-03-18 05:03:00.558566 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-18 05:03:00.558575 | orchestrator | Wednesday 18 March 2026 05:02:44 +0000 (0:00:00.155) 0:19:16.030 *******
2026-03-18 05:03:00.558585 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-03-18 05:03:00.558595 | orchestrator |
2026-03-18 05:03:00.558605 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-18 05:03:00.558615 | orchestrator | Wednesday 18 March 2026 05:02:44 +0000 (0:00:00.232) 0:19:16.262 *******
2026-03-18 05:03:00.558624 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.558634 | orchestrator |
2026-03-18 05:03:00.558643 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-18 05:03:00.558653 | orchestrator | Wednesday 18 March 2026 05:02:44 +0000 (0:00:00.180) 0:19:16.443 *******
2026-03-18 05:03:00.558663 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.558672 | orchestrator |
2026-03-18 05:03:00.558682 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-18 05:03:00.558692 | orchestrator | Wednesday 18 March 2026 05:02:45 +0000 (0:00:00.581) 0:19:17.025 *******
2026-03-18 05:03:00.558701 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.558711 | orchestrator |
2026-03-18 05:03:00.558721 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-18 05:03:00.558730 | orchestrator | Wednesday 18 March 2026 05:02:45 +0000 (0:00:00.162) 0:19:17.188 *******
2026-03-18 05:03:00.558740 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.558750 | orchestrator |
2026-03-18 05:03:00.558760 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-18 05:03:00.558770 | orchestrator | Wednesday 18 March 2026 05:02:45 +0000 (0:00:00.152) 0:19:17.341 *******
2026-03-18 05:03:00.558779 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.558789 | orchestrator |
2026-03-18 05:03:00.558799 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-18 05:03:00.558811 | orchestrator | Wednesday 18 March 2026 05:02:45 +0000 (0:00:00.159) 0:19:17.500 *******
2026-03-18 05:03:00.558822 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.558835 | orchestrator |
2026-03-18 05:03:00.558847 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-18 05:03:00.558859 | orchestrator | Wednesday 18 March 2026 05:02:46 +0000 (0:00:00.165) 0:19:17.665 *******
2026-03-18 05:03:00.558898 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.558910 | orchestrator |
2026-03-18 05:03:00.558921 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-18 05:03:00.558933 | orchestrator | Wednesday 18 March 2026 05:02:46 +0000 (0:00:00.171) 0:19:17.837 *******
2026-03-18 05:03:00.558944 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.558955 | orchestrator |
2026-03-18 05:03:00.558967 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-18 05:03:00.558978 | orchestrator | Wednesday 18 March 2026 05:02:46 +0000 (0:00:00.162) 0:19:17.999 *******
2026-03-18 05:03:00.558989 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:03:00.559001 | orchestrator |
2026-03-18 05:03:00.559013 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-18 05:03:00.559024 | orchestrator | Wednesday 18 March 2026 05:02:46 +0000 (0:00:00.237) 0:19:18.236 *******
2026-03-18 05:03:00.559035 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-03-18 05:03:00.559102 | orchestrator |
2026-03-18 05:03:00.559116 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-18 05:03:00.559129 | orchestrator | Wednesday 18 March 2026 05:02:46 +0000 (0:00:00.226) 0:19:18.463 *******
2026-03-18 05:03:00.559141 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-03-18 05:03:00.559152 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-18 05:03:00.559161 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-18 05:03:00.559171 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-18 05:03:00.559195 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-18 05:03:00.559205 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-18 05:03:00.559215 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-18 05:03:00.559224 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-18 05:03:00.559235 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-18 05:03:00.559262 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-18 05:03:00.559272 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-18 05:03:00.559282 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-18 05:03:00.559292 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-18 05:03:00.559301 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-18 05:03:00.559311 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-03-18 05:03:00.559321 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-03-18 05:03:00.559330 | orchestrator |
2026-03-18 05:03:00.559340 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-18 05:03:00.559350 | orchestrator | Wednesday 18 March 2026 05:02:52 +0000 (0:00:05.423) 0:19:23.887 *******
2026-03-18 05:03:00.559359 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-03-18 05:03:00.559369 | orchestrator |
2026-03-18 05:03:00.559379 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-18 05:03:00.559388 | orchestrator | Wednesday 18 March 2026 05:02:52 +0000 (0:00:00.514) 0:19:24.402 *******
2026-03-18 05:03:00.559398 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-18 05:03:00.559409 | orchestrator |
2026-03-18 05:03:00.559419 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-18 05:03:00.559429 | orchestrator | Wednesday 18 March 2026 05:02:53 +0000 (0:00:00.533) 0:19:24.935 *******
2026-03-18 05:03:00.559439 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-18 05:03:00.559458 | orchestrator |
2026-03-18 05:03:00.559468 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-18 05:03:00.559478 | orchestrator | Wednesday 18 March 2026 05:02:54 +0000 (0:00:00.994) 0:19:25.929 *******
2026-03-18 05:03:00.559488 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.559497 | orchestrator |
2026-03-18 05:03:00.559507 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-18 05:03:00.559516 | orchestrator | Wednesday 18 March 2026 05:02:54 +0000 (0:00:00.145) 0:19:26.075 *******
2026-03-18 05:03:00.559526 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.559535 | orchestrator |
2026-03-18 05:03:00.559545 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-18 05:03:00.559555 | orchestrator | Wednesday 18 March 2026 05:02:54 +0000 (0:00:00.175) 0:19:26.251 *******
2026-03-18 05:03:00.559564 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.559574 | orchestrator |
2026-03-18 05:03:00.559584 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-18 05:03:00.559593 | orchestrator | Wednesday 18 March 2026 05:02:54 +0000 (0:00:00.175) 0:19:26.426 *******
2026-03-18 05:03:00.559603 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.559612 | orchestrator |
2026-03-18 05:03:00.559622 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-18 05:03:00.559632 | orchestrator | Wednesday 18 March 2026 05:02:54 +0000 (0:00:00.147) 0:19:26.574 *******
2026-03-18 05:03:00.559641 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.559651 | orchestrator |
2026-03-18 05:03:00.559661 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-18 05:03:00.559671 | orchestrator | Wednesday 18 March 2026 05:02:55 +0000 (0:00:00.159) 0:19:26.733 *******
2026-03-18 05:03:00.559680 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.559690 | orchestrator |
2026-03-18 05:03:00.559699 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-18 05:03:00.559709 | orchestrator | Wednesday 18 March 2026 05:02:55 +0000 (0:00:00.152) 0:19:26.885 *******
2026-03-18 05:03:00.559719 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.559729 | orchestrator |
2026-03-18 05:03:00.559738 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-18 05:03:00.559748 | orchestrator | Wednesday 18 March 2026 05:02:55 +0000 (0:00:00.163) 0:19:27.049 *******
2026-03-18 05:03:00.559758 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.559767 | orchestrator |
2026-03-18 05:03:00.559777 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-18 05:03:00.559787 | orchestrator | Wednesday 18 March 2026 05:02:55 +0000 (0:00:00.145) 0:19:27.194 *******
2026-03-18 05:03:00.559796 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.559806 | orchestrator |
2026-03-18 05:03:00.559816 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-18 05:03:00.559825 | orchestrator | Wednesday 18 March 2026 05:02:55 +0000 (0:00:00.139) 0:19:27.333 *******
2026-03-18 05:03:00.559835 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:00.559844 | orchestrator |
2026-03-18 05:03:00.559854 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-18 05:03:00.559864 | orchestrator | Wednesday 18 March 2026 05:02:55 +0000 (0:00:00.146) 0:19:27.479 *******
2026-03-18 05:03:00.559873 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:03:00.559883 | orchestrator |
2026-03-18 05:03:00.559893 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-18 05:03:00.559903 | orchestrator | Wednesday 18 March 2026 05:02:56 +0000 (0:00:00.198) 0:19:27.678 *******
2026-03-18 05:03:00.559917 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-03-18 05:03:00.559927 | orchestrator |
2026-03-18 05:03:00.559937 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-18 05:03:00.559947 | orchestrator | Wednesday 18 March 2026 05:03:00 +0000 (0:00:04.279) 0:19:31.957 *******
2026-03-18 05:03:00.559968 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-18 05:03:22.321565 | orchestrator |
2026-03-18 05:03:22.321682 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-18 05:03:22.321701 | orchestrator | Wednesday 18 March 2026 05:03:00 +0000 (0:00:00.205) 0:19:32.163 *******
2026-03-18 05:03:22.321716 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-18 05:03:22.321731 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-18 05:03:22.321744 | orchestrator |
2026-03-18 05:03:22.321755 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-18 05:03:22.321767 | orchestrator | Wednesday 18 March 2026 05:03:07 +0000 (0:00:06.800) 0:19:38.964 *******
2026-03-18 05:03:22.321778 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:22.321790 | orchestrator |
2026-03-18 05:03:22.321801 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-18 05:03:22.321812 | orchestrator | Wednesday 18 March 2026 05:03:07 +0000 (0:00:00.145) 0:19:39.109 *******
2026-03-18 05:03:22.321823 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:22.321834 | orchestrator |
2026-03-18 05:03:22.321845 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-18 05:03:22.321858 | orchestrator | Wednesday 18 March 2026 05:03:07 +0000 (0:00:00.147) 0:19:39.256 *******
2026-03-18 05:03:22.321869 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:03:22.321880 | orchestrator |
2026-03-18 05:03:22.321890 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to
radosgw_address_block ipv4] **** 2026-03-18 05:03:22.321901 | orchestrator | Wednesday 18 March 2026 05:03:07 +0000 (0:00:00.167) 0:19:39.424 ******* 2026-03-18 05:03:22.321912 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:03:22.321923 | orchestrator | 2026-03-18 05:03:22.321934 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 05:03:22.321945 | orchestrator | Wednesday 18 March 2026 05:03:07 +0000 (0:00:00.181) 0:19:39.605 ******* 2026-03-18 05:03:22.321956 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:03:22.321967 | orchestrator | 2026-03-18 05:03:22.321977 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 05:03:22.321988 | orchestrator | Wednesday 18 March 2026 05:03:08 +0000 (0:00:00.163) 0:19:39.769 ******* 2026-03-18 05:03:22.321999 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:03:22.322011 | orchestrator | 2026-03-18 05:03:22.322121 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 05:03:22.322136 | orchestrator | Wednesday 18 March 2026 05:03:08 +0000 (0:00:00.250) 0:19:40.019 ******* 2026-03-18 05:03:22.322148 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-18 05:03:22.322161 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-18 05:03:22.322174 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-18 05:03:22.322186 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:03:22.322199 | orchestrator | 2026-03-18 05:03:22.322211 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 05:03:22.322223 | orchestrator | Wednesday 18 March 2026 05:03:08 +0000 (0:00:00.438) 0:19:40.458 ******* 2026-03-18 05:03:22.322235 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-18 05:03:22.322272 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-18 05:03:22.322285 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-18 05:03:22.322297 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:03:22.322309 | orchestrator | 2026-03-18 05:03:22.322323 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 05:03:22.322335 | orchestrator | Wednesday 18 March 2026 05:03:09 +0000 (0:00:00.425) 0:19:40.884 ******* 2026-03-18 05:03:22.322348 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-18 05:03:22.322360 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-18 05:03:22.322373 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-18 05:03:22.322384 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:03:22.322394 | orchestrator | 2026-03-18 05:03:22.322405 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 05:03:22.322416 | orchestrator | Wednesday 18 March 2026 05:03:10 +0000 (0:00:00.787) 0:19:41.671 ******* 2026-03-18 05:03:22.322426 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:03:22.322437 | orchestrator | 2026-03-18 05:03:22.322448 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 05:03:22.322458 | orchestrator | Wednesday 18 March 2026 05:03:10 +0000 (0:00:00.190) 0:19:41.861 ******* 2026-03-18 05:03:22.322484 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-18 05:03:22.322495 | orchestrator | 2026-03-18 05:03:22.322534 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-18 05:03:22.322545 | orchestrator | Wednesday 18 March 2026 05:03:11 +0000 (0:00:01.140) 0:19:43.002 ******* 2026-03-18 05:03:22.322556 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:03:22.322567 | orchestrator | 
2026-03-18 05:03:22.322578 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-18 05:03:22.322588 | orchestrator | Wednesday 18 March 2026 05:03:12 +0000 (0:00:00.828) 0:19:43.831 ******* 2026-03-18 05:03:22.322599 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:03:22.322610 | orchestrator | 2026-03-18 05:03:22.322638 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-18 05:03:22.322650 | orchestrator | Wednesday 18 March 2026 05:03:12 +0000 (0:00:00.156) 0:19:43.987 ******* 2026-03-18 05:03:22.322661 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 05:03:22.322672 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:03:22.322682 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:03:22.322693 | orchestrator | 2026-03-18 05:03:22.322704 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-18 05:03:22.322714 | orchestrator | Wednesday 18 March 2026 05:03:13 +0000 (0:00:00.695) 0:19:44.683 ******* 2026-03-18 05:03:22.322725 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-03-18 05:03:22.322735 | orchestrator | 2026-03-18 05:03:22.322746 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-18 05:03:22.322757 | orchestrator | Wednesday 18 March 2026 05:03:13 +0000 (0:00:00.207) 0:19:44.890 ******* 2026-03-18 05:03:22.322767 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:03:22.322778 | orchestrator | 2026-03-18 05:03:22.322789 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-18 05:03:22.322799 | orchestrator | Wednesday 18 March 2026 05:03:13 +0000 (0:00:00.143) 
0:19:45.034 ******* 2026-03-18 05:03:22.322810 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:03:22.322820 | orchestrator | 2026-03-18 05:03:22.322831 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-18 05:03:22.322842 | orchestrator | Wednesday 18 March 2026 05:03:13 +0000 (0:00:00.126) 0:19:45.160 ******* 2026-03-18 05:03:22.322852 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:03:22.322863 | orchestrator | 2026-03-18 05:03:22.322883 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-18 05:03:22.322894 | orchestrator | Wednesday 18 March 2026 05:03:13 +0000 (0:00:00.446) 0:19:45.607 ******* 2026-03-18 05:03:22.322905 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:03:22.322915 | orchestrator | 2026-03-18 05:03:22.322926 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-18 05:03:22.322937 | orchestrator | Wednesday 18 March 2026 05:03:14 +0000 (0:00:00.174) 0:19:45.781 ******* 2026-03-18 05:03:22.322947 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-18 05:03:22.322958 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-18 05:03:22.322969 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-18 05:03:22.322980 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-18 05:03:22.322990 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-18 05:03:22.323001 | orchestrator | 2026-03-18 05:03:22.323011 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-18 05:03:22.323022 | orchestrator | Wednesday 18 March 2026 05:03:17 +0000 (0:00:02.867) 0:19:48.648 ******* 2026-03-18 
05:03:22.323033 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:03:22.323043 | orchestrator | 2026-03-18 05:03:22.323054 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-18 05:03:22.323105 | orchestrator | Wednesday 18 March 2026 05:03:17 +0000 (0:00:00.421) 0:19:49.070 ******* 2026-03-18 05:03:22.323119 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-03-18 05:03:22.323130 | orchestrator | 2026-03-18 05:03:22.323140 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-18 05:03:22.323151 | orchestrator | Wednesday 18 March 2026 05:03:17 +0000 (0:00:00.212) 0:19:49.283 ******* 2026-03-18 05:03:22.323161 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-18 05:03:22.323172 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-18 05:03:22.323183 | orchestrator | 2026-03-18 05:03:22.323193 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-18 05:03:22.323204 | orchestrator | Wednesday 18 March 2026 05:03:18 +0000 (0:00:00.834) 0:19:50.118 ******* 2026-03-18 05:03:22.323215 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 05:03:22.323225 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-18 05:03:22.323236 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-18 05:03:22.323247 | orchestrator | 2026-03-18 05:03:22.323257 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-18 05:03:22.323268 | orchestrator | Wednesday 18 March 2026 05:03:20 +0000 (0:00:02.268) 0:19:52.387 ******* 2026-03-18 05:03:22.323278 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-18 05:03:22.323289 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-18 
05:03:22.323300 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:03:22.323310 | orchestrator | 2026-03-18 05:03:22.323321 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-18 05:03:22.323331 | orchestrator | Wednesday 18 March 2026 05:03:21 +0000 (0:00:00.985) 0:19:53.372 ******* 2026-03-18 05:03:22.323342 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:03:22.323353 | orchestrator | 2026-03-18 05:03:22.323369 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-18 05:03:22.323380 | orchestrator | Wednesday 18 March 2026 05:03:22 +0000 (0:00:00.252) 0:19:53.625 ******* 2026-03-18 05:03:22.323390 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:03:22.323401 | orchestrator | 2026-03-18 05:03:22.323416 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-18 05:03:22.323427 | orchestrator | Wednesday 18 March 2026 05:03:22 +0000 (0:00:00.147) 0:19:53.772 ******* 2026-03-18 05:03:22.323445 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:03:22.323456 | orchestrator | 2026-03-18 05:03:22.323473 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-18 05:05:20.726002 | orchestrator | Wednesday 18 March 2026 05:03:22 +0000 (0:00:00.152) 0:19:53.925 ******* 2026-03-18 05:05:20.726234 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-03-18 05:05:20.726254 | orchestrator | 2026-03-18 05:05:20.726267 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-18 05:05:20.726278 | orchestrator | Wednesday 18 March 2026 05:03:22 +0000 (0:00:00.239) 0:19:54.165 ******* 2026-03-18 05:05:20.726289 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:05:20.726301 | orchestrator | 2026-03-18 05:05:20.726313 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-03-18 05:05:20.726325 | orchestrator | Wednesday 18 March 2026 05:03:23 +0000 (0:00:00.474) 0:19:54.639 ******* 2026-03-18 05:05:20.726336 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:05:20.726347 | orchestrator | 2026-03-18 05:05:20.726358 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-18 05:05:20.726369 | orchestrator | Wednesday 18 March 2026 05:03:25 +0000 (0:00:02.311) 0:19:56.952 ******* 2026-03-18 05:05:20.726380 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-03-18 05:05:20.726391 | orchestrator | 2026-03-18 05:05:20.726402 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-18 05:05:20.726412 | orchestrator | Wednesday 18 March 2026 05:03:25 +0000 (0:00:00.529) 0:19:57.481 ******* 2026-03-18 05:05:20.726423 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:05:20.726434 | orchestrator | 2026-03-18 05:05:20.726445 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-18 05:05:20.726456 | orchestrator | Wednesday 18 March 2026 05:03:26 +0000 (0:00:00.977) 0:19:58.458 ******* 2026-03-18 05:05:20.726467 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:05:20.726477 | orchestrator | 2026-03-18 05:05:20.726488 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-18 05:05:20.726499 | orchestrator | Wednesday 18 March 2026 05:03:27 +0000 (0:00:00.920) 0:19:59.379 ******* 2026-03-18 05:05:20.726533 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:05:20.726546 | orchestrator | 2026-03-18 05:05:20.726559 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-18 05:05:20.726572 | orchestrator | Wednesday 18 March 2026 05:03:29 +0000 (0:00:01.250) 0:20:00.630 ******* 2026-03-18 
05:05:20.726585 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:05:20.726599 | orchestrator | 2026-03-18 05:05:20.726611 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-18 05:05:20.726624 | orchestrator | Wednesday 18 March 2026 05:03:29 +0000 (0:00:00.157) 0:20:00.787 ******* 2026-03-18 05:05:20.726636 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:05:20.726649 | orchestrator | 2026-03-18 05:05:20.726661 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-18 05:05:20.726674 | orchestrator | Wednesday 18 March 2026 05:03:29 +0000 (0:00:00.152) 0:20:00.940 ******* 2026-03-18 05:05:20.726687 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-18 05:05:20.726701 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-03-18 05:05:20.726714 | orchestrator | 2026-03-18 05:05:20.726727 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-18 05:05:20.726740 | orchestrator | Wednesday 18 March 2026 05:03:30 +0000 (0:00:00.788) 0:20:01.729 ******* 2026-03-18 05:05:20.726753 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-03-18 05:05:20.726765 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-03-18 05:05:20.726778 | orchestrator | 2026-03-18 05:05:20.726792 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-18 05:05:20.726805 | orchestrator | Wednesday 18 March 2026 05:03:32 +0000 (0:00:01.995) 0:20:03.725 ******* 2026-03-18 05:05:20.726818 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-03-18 05:05:20.726854 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-03-18 05:05:20.726867 | orchestrator | 2026-03-18 05:05:20.726879 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-18 05:05:20.726892 | orchestrator | Wednesday 18 March 2026 05:03:35 +0000 (0:00:03.606) 
0:20:07.331 ******* 2026-03-18 05:05:20.726903 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:05:20.726914 | orchestrator | 2026-03-18 05:05:20.726925 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-18 05:05:20.726936 | orchestrator | Wednesday 18 March 2026 05:03:35 +0000 (0:00:00.230) 0:20:07.562 ******* 2026-03-18 05:05:20.726947 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-03-18 05:05:20.726959 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-18 05:05:20.726970 | orchestrator | 2026-03-18 05:05:20.726981 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-18 05:05:20.726992 | orchestrator | Wednesday 18 March 2026 05:03:48 +0000 (0:00:12.329) 0:20:19.892 ******* 2026-03-18 05:05:20.727003 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:05:20.727014 | orchestrator | 2026-03-18 05:05:20.727025 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-18 05:05:20.727036 | orchestrator | Wednesday 18 March 2026 05:03:48 +0000 (0:00:00.329) 0:20:20.221 ******* 2026-03-18 05:05:20.727047 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:05:20.727057 | orchestrator | 2026-03-18 05:05:20.727068 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-18 05:05:20.727079 | orchestrator | Wednesday 18 March 2026 05:03:49 +0000 (0:00:00.465) 0:20:20.687 ******* 2026-03-18 05:05:20.727104 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:05:20.727115 | orchestrator | 2026-03-18 05:05:20.727126 | orchestrator | TASK [Waiting for clean pgs...] 
************************************************ 2026-03-18 05:05:20.727137 | orchestrator | Wednesday 18 March 2026 05:03:49 +0000 (0:00:00.140) 0:20:20.828 ******* 2026-03-18 05:05:20.727167 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 2026-03-18 05:05:20.727180 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-03-18 05:05:20.727209 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-18 05:05:20.727220 | orchestrator | 2026-03-18 05:05:20.727231 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-03-18 05:05:20.727242 | orchestrator | 2026-03-18 05:05:20.727253 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 05:05:20.727264 | orchestrator | Wednesday 18 March 2026 05:03:57 +0000 (0:00:07.810) 0:20:28.638 ******* 2026-03-18 05:05:20.727275 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:05:20.727287 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:05:20.727306 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:05:20.727327 | orchestrator | 2026-03-18 05:05:20.727346 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 05:05:20.727365 | orchestrator | Wednesday 18 March 2026 05:03:57 +0000 (0:00:00.686) 0:20:29.325 ******* 2026-03-18 05:05:20.727377 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:05:20.727388 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:05:20.727399 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:05:20.727409 | orchestrator | 2026-03-18 05:05:20.727420 | orchestrator | TASK [Re-enable pg autoscale on pools] ***************************************** 2026-03-18 05:05:20.727431 | orchestrator | Wednesday 18 March 2026 05:03:58 +0000 (0:00:00.870) 0:20:30.195 ******* 2026-03-18 05:05:20.727442 | 
orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-03-18 05:05:20.727453 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-03-18 05:05:20.727465 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-03-18 05:05:20.727485 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-03-18 05:05:20.727498 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-03-18 05:05:20.727509 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-03-18 05:05:20.727520 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-03-18 05:05:20.727530 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-03-18 05:05:20.727541 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-03-18 05:05:20.727552 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-03-18 05:05:20.727563 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-03-18 05:05:20.727574 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-03-18 05:05:20.727585 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-03-18 05:05:20.727595 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-03-18 05:05:20.727606 | orchestrator | 
2026-03-18 05:05:20.727617 | orchestrator | TASK [Unset osd flags] ********************************************************* 2026-03-18 05:05:20.727628 | orchestrator | Wednesday 18 March 2026 05:05:11 +0000 (0:01:13.020) 0:21:43.216 ******* 2026-03-18 05:05:20.727639 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-03-18 05:05:20.727649 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-03-18 05:05:20.727660 | orchestrator | 2026-03-18 05:05:20.727671 | orchestrator | TASK [Re-enable balancer] ****************************************************** 2026-03-18 05:05:20.727682 | orchestrator | Wednesday 18 March 2026 05:05:16 +0000 (0:00:04.956) 0:21:48.173 ******* 2026-03-18 05:05:20.727692 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-18 05:05:20.727703 | orchestrator | 2026-03-18 05:05:20.727714 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] ********************** 2026-03-18 05:05:20.727724 | orchestrator | 2026-03-18 05:05:20.727735 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 05:05:20.727746 | orchestrator | Wednesday 18 March 2026 05:05:19 +0000 (0:00:02.509) 0:21:50.683 ******* 2026-03-18 05:05:20.727757 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-18 05:05:20.727767 | orchestrator | 2026-03-18 05:05:20.727778 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-18 05:05:20.727789 | orchestrator | Wednesday 18 March 2026 05:05:19 +0000 (0:00:00.290) 0:21:50.974 ******* 2026-03-18 05:05:20.727800 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:05:20.727811 | orchestrator | 2026-03-18 05:05:20.727821 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-18 05:05:20.727832 | 
orchestrator | Wednesday 18 March 2026 05:05:19 +0000 (0:00:00.529) 0:21:51.503 ******* 2026-03-18 05:05:20.727843 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:05:20.727854 | orchestrator | 2026-03-18 05:05:20.727870 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 05:05:20.727881 | orchestrator | Wednesday 18 March 2026 05:05:20 +0000 (0:00:00.160) 0:21:51.664 ******* 2026-03-18 05:05:20.727892 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:05:20.727903 | orchestrator | 2026-03-18 05:05:20.727914 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 05:05:20.727924 | orchestrator | Wednesday 18 March 2026 05:05:20 +0000 (0:00:00.496) 0:21:52.160 ******* 2026-03-18 05:05:20.727941 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:05:20.727952 | orchestrator | 2026-03-18 05:05:20.727969 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-18 05:05:29.036079 | orchestrator | Wednesday 18 March 2026 05:05:20 +0000 (0:00:00.169) 0:21:52.330 ******* 2026-03-18 05:05:29.036324 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:05:29.036359 | orchestrator | 2026-03-18 05:05:29.036381 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-18 05:05:29.036399 | orchestrator | Wednesday 18 March 2026 05:05:20 +0000 (0:00:00.157) 0:21:52.488 ******* 2026-03-18 05:05:29.036418 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:05:29.036436 | orchestrator | 2026-03-18 05:05:29.036455 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-18 05:05:29.036474 | orchestrator | Wednesday 18 March 2026 05:05:21 +0000 (0:00:00.464) 0:21:52.952 ******* 2026-03-18 05:05:29.036492 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:05:29.036511 | orchestrator | 2026-03-18 05:05:29.036531 | 
orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-18 05:05:29.036550 | orchestrator | Wednesday 18 March 2026 05:05:21 +0000 (0:00:00.178) 0:21:53.131 ******* 2026-03-18 05:05:29.036570 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:05:29.036590 | orchestrator | 2026-03-18 05:05:29.036608 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-18 05:05:29.036627 | orchestrator | Wednesday 18 March 2026 05:05:21 +0000 (0:00:00.191) 0:21:53.322 ******* 2026-03-18 05:05:29.036645 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 05:05:29.036664 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:05:29.036682 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:05:29.036700 | orchestrator | 2026-03-18 05:05:29.036718 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-18 05:05:29.036736 | orchestrator | Wednesday 18 March 2026 05:05:22 +0000 (0:00:00.719) 0:21:54.042 ******* 2026-03-18 05:05:29.036755 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:05:29.036772 | orchestrator | 2026-03-18 05:05:29.036791 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-18 05:05:29.036810 | orchestrator | Wednesday 18 March 2026 05:05:22 +0000 (0:00:00.276) 0:21:54.319 ******* 2026-03-18 05:05:29.036829 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 05:05:29.036848 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:05:29.036868 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:05:29.036886 | orchestrator | 2026-03-18 05:05:29.036905 | orchestrator | TASK [ceph-facts : Check for a ceph 
mon socket] ******************************** 2026-03-18 05:05:29.036924 | orchestrator | Wednesday 18 March 2026 05:05:24 +0000 (0:00:01.938) 0:21:56.257 ******* 2026-03-18 05:05:29.036943 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-18 05:05:29.036962 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-18 05:05:29.036982 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-18 05:05:29.037001 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:05:29.037020 | orchestrator | 2026-03-18 05:05:29.037039 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-18 05:05:29.037057 | orchestrator | Wednesday 18 March 2026 05:05:25 +0000 (0:00:00.474) 0:21:56.732 ******* 2026-03-18 05:05:29.037079 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-18 05:05:29.037100 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-18 05:05:29.037148 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-18 05:05:29.037210 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:05:29.037229 | orchestrator | 2026-03-18 05:05:29.037248 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-18 05:05:29.037266 | orchestrator | Wednesday 18 March 2026 05:05:25 +0000 (0:00:00.668) 0:21:57.400 ******* 
2026-03-18 05:05:29.037288 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:05:29.037329 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:05:29.037375 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:05:29.037394 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:05:29.037413 | orchestrator | 2026-03-18 05:05:29.037432 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-18 05:05:29.037451 | orchestrator | Wednesday 18 March 2026 05:05:26 +0000 (0:00:00.225) 0:21:57.626 ******* 2026-03-18 05:05:29.037472 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f231ed715636', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 05:05:23.253936', 'end': '2026-03-18 
05:05:23.305306', 'delta': '0:00:00.051370', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f231ed715636'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-18 05:05:29.037495 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c6b616adb9bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 05:05:23.861222', 'end': '2026-03-18 05:05:23.918204', 'delta': '0:00:00.056982', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c6b616adb9bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-18 05:05:29.037515 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '38d5679b5612', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 05:05:24.444800', 'end': '2026-03-18 05:05:24.495085', 'delta': '0:00:00.050285', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['38d5679b5612'], 'stderr_lines': [], 
'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-18 05:05:29.037554 | orchestrator | 2026-03-18 05:05:29.037574 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-18 05:05:29.037592 | orchestrator | Wednesday 18 March 2026 05:05:26 +0000 (0:00:00.248) 0:21:57.874 ******* 2026-03-18 05:05:29.037609 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:05:29.037627 | orchestrator | 2026-03-18 05:05:29.037645 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-18 05:05:29.037664 | orchestrator | Wednesday 18 March 2026 05:05:26 +0000 (0:00:00.276) 0:21:58.150 ******* 2026-03-18 05:05:29.037682 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:05:29.037699 | orchestrator | 2026-03-18 05:05:29.037717 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-18 05:05:29.037736 | orchestrator | Wednesday 18 March 2026 05:05:26 +0000 (0:00:00.246) 0:21:58.397 ******* 2026-03-18 05:05:29.037754 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:05:29.037772 | orchestrator | 2026-03-18 05:05:29.037791 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-18 05:05:29.037809 | orchestrator | Wednesday 18 March 2026 05:05:26 +0000 (0:00:00.161) 0:21:58.559 ******* 2026-03-18 05:05:29.037828 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:05:29.037845 | orchestrator | 2026-03-18 05:05:29.037873 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 05:05:29.037892 | orchestrator | Wednesday 18 March 2026 05:05:28 +0000 (0:00:01.769) 0:22:00.328 ******* 2026-03-18 05:05:29.037911 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:05:29.037930 | orchestrator | 2026-03-18 05:05:29.037948 | orchestrator | TASK [ceph-facts : Set_fact fsid from 
current_fsid] **************************** 2026-03-18 05:05:29.037965 | orchestrator | Wednesday 18 March 2026 05:05:28 +0000 (0:00:00.173) 0:22:00.502 ******* 2026-03-18 05:05:29.037982 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:05:29.038000 | orchestrator | 2026-03-18 05:05:29.038101 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-18 05:05:30.983419 | orchestrator | Wednesday 18 March 2026 05:05:29 +0000 (0:00:00.146) 0:22:00.648 ******* 2026-03-18 05:05:30.983548 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:05:30.983566 | orchestrator | 2026-03-18 05:05:30.983578 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 05:05:30.983590 | orchestrator | Wednesday 18 March 2026 05:05:29 +0000 (0:00:00.235) 0:22:00.883 ******* 2026-03-18 05:05:30.983601 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:05:30.983661 | orchestrator | 2026-03-18 05:05:30.983675 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-18 05:05:30.983686 | orchestrator | Wednesday 18 March 2026 05:05:29 +0000 (0:00:00.154) 0:22:01.037 ******* 2026-03-18 05:05:30.983697 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:05:30.983709 | orchestrator | 2026-03-18 05:05:30.983720 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-18 05:05:30.983731 | orchestrator | Wednesday 18 March 2026 05:05:29 +0000 (0:00:00.166) 0:22:01.204 ******* 2026-03-18 05:05:30.983742 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:05:30.983752 | orchestrator | 2026-03-18 05:05:30.983763 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-18 05:05:30.983774 | orchestrator | Wednesday 18 March 2026 05:05:29 +0000 (0:00:00.160) 0:22:01.365 ******* 2026-03-18 05:05:30.983785 | orchestrator | skipping: 
[testbed-node-0] 2026-03-18 05:05:30.983796 | orchestrator | 2026-03-18 05:05:30.983831 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-18 05:05:30.983842 | orchestrator | Wednesday 18 March 2026 05:05:29 +0000 (0:00:00.162) 0:22:01.527 ******* 2026-03-18 05:05:30.983853 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:05:30.983863 | orchestrator | 2026-03-18 05:05:30.983874 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-18 05:05:30.983885 | orchestrator | Wednesday 18 March 2026 05:05:30 +0000 (0:00:00.175) 0:22:01.702 ******* 2026-03-18 05:05:30.983896 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:05:30.983906 | orchestrator | 2026-03-18 05:05:30.983918 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-18 05:05:30.983932 | orchestrator | Wednesday 18 March 2026 05:05:30 +0000 (0:00:00.166) 0:22:01.868 ******* 2026-03-18 05:05:30.983945 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:05:30.983957 | orchestrator | 2026-03-18 05:05:30.983970 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-18 05:05:30.983983 | orchestrator | Wednesday 18 March 2026 05:05:30 +0000 (0:00:00.152) 0:22:02.021 ******* 2026-03-18 05:05:30.983998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:05:30.984013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': 
[], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:05:30.984024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:05:30.984039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 05:05:30.984067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:05:30.984099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:05:30.984111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:05:30.984134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd04444e1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-18 05:05:30.984148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:05:30.984187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:05:30.984199 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:05:30.984210 | orchestrator | 2026-03-18 05:05:30.984221 | 
orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-18 05:05:30.984232 | orchestrator | Wednesday 18 March 2026 05:05:30 +0000 (0:00:00.292) 0:22:02.314 ******* 2026-03-18 05:05:30.984251 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:05:32.534910 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:05:32.535031 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 
'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:05:32.535058 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:05:32.535072 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:05:32.535101 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:05:32.535114 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:05:32.535241 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd04444e1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': 
[]}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1', 'scsi-SQEMU_QEMU_HARDDISK_d04444e1-2fbb-477e-b996-d330c703cca0-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:05:32.535261 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:05:32.535279 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:05:32.535292 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:05:32.535305 | orchestrator | 2026-03-18 05:05:32.535317 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-18 05:05:32.535337 | orchestrator | Wednesday 18 March 2026 05:05:30 +0000 (0:00:00.280) 0:22:02.594 ******* 2026-03-18 05:05:32.535348 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:05:32.535360 | orchestrator | 2026-03-18 05:05:32.535371 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-18 05:05:32.535382 | orchestrator 
| Wednesday 18 March 2026 05:05:31 +0000 (0:00:00.875) 0:22:03.469 ******* 2026-03-18 05:05:32.535393 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:05:32.535404 | orchestrator | 2026-03-18 05:05:32.535415 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 05:05:32.535428 | orchestrator | Wednesday 18 March 2026 05:05:32 +0000 (0:00:00.163) 0:22:03.633 ******* 2026-03-18 05:05:32.535441 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:05:32.535453 | orchestrator | 2026-03-18 05:05:32.535466 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 05:05:32.535488 | orchestrator | Wednesday 18 March 2026 05:05:32 +0000 (0:00:00.510) 0:22:04.144 ******* 2026-03-18 05:06:01.774924 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:06:01.775039 | orchestrator | 2026-03-18 05:06:01.775055 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 05:06:01.775068 | orchestrator | Wednesday 18 March 2026 05:05:32 +0000 (0:00:00.154) 0:22:04.298 ******* 2026-03-18 05:06:01.775080 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:06:01.775091 | orchestrator | 2026-03-18 05:06:01.775102 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 05:06:01.775113 | orchestrator | Wednesday 18 March 2026 05:05:32 +0000 (0:00:00.278) 0:22:04.577 ******* 2026-03-18 05:06:01.775124 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:06:01.775134 | orchestrator | 2026-03-18 05:06:01.775145 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-18 05:06:01.775156 | orchestrator | Wednesday 18 March 2026 05:05:33 +0000 (0:00:00.164) 0:22:04.741 ******* 2026-03-18 05:06:01.775167 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 05:06:01.775207 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-1) 2026-03-18 05:06:01.775221 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-18 05:06:01.775232 | orchestrator | 2026-03-18 05:06:01.775243 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-18 05:06:01.775254 | orchestrator | Wednesday 18 March 2026 05:05:33 +0000 (0:00:00.740) 0:22:05.482 ******* 2026-03-18 05:06:01.775265 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-18 05:06:01.775277 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-18 05:06:01.775287 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-18 05:06:01.775298 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:06:01.775309 | orchestrator | 2026-03-18 05:06:01.775320 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-18 05:06:01.775330 | orchestrator | Wednesday 18 March 2026 05:05:34 +0000 (0:00:00.215) 0:22:05.697 ******* 2026-03-18 05:06:01.775341 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:06:01.775352 | orchestrator | 2026-03-18 05:06:01.775363 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-18 05:06:01.775374 | orchestrator | Wednesday 18 March 2026 05:05:34 +0000 (0:00:00.148) 0:22:05.845 ******* 2026-03-18 05:06:01.775384 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 05:06:01.775395 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:06:01.775407 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:06:01.775418 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-18 05:06:01.775429 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-18 05:06:01.775439 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 05:06:01.775475 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 05:06:01.775489 | orchestrator | 2026-03-18 05:06:01.775502 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-18 05:06:01.775515 | orchestrator | Wednesday 18 March 2026 05:05:35 +0000 (0:00:01.276) 0:22:07.122 ******* 2026-03-18 05:06:01.775527 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-18 05:06:01.775540 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:06:01.775553 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:06:01.775566 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-18 05:06:01.775578 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 05:06:01.775591 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 05:06:01.775603 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 05:06:01.775615 | orchestrator | 2026-03-18 05:06:01.775628 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************ 2026-03-18 05:06:01.775641 | orchestrator | Wednesday 18 March 2026 05:05:37 +0000 (0:00:01.763) 0:22:08.886 ******* 2026-03-18 05:06:01.775654 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:06:01.775667 | orchestrator | 2026-03-18 05:06:01.775680 | orchestrator | TASK [Wait until only rank 0 is up] ******************************************** 2026-03-18 05:06:01.775708 | orchestrator | Wednesday 18 March 2026 05:05:39 +0000 
(0:00:02.169) 0:22:11.055 ******* 2026-03-18 05:06:01.775721 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:06:01.775734 | orchestrator | 2026-03-18 05:06:01.775747 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-03-18 05:06:01.775760 | orchestrator | Wednesday 18 March 2026 05:05:41 +0000 (0:00:02.360) 0:22:13.416 ******* 2026-03-18 05:06:01.775773 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:06:01.775785 | orchestrator | 2026-03-18 05:06:01.775797 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-03-18 05:06:01.775810 | orchestrator | Wednesday 18 March 2026 05:05:42 +0000 (0:00:01.112) 0:22:14.528 ******* 2026-03-18 05:06:01.775845 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4661', 'value': {'gid': 4661, 'name': 'testbed-node-5', 'rank': 0, 'incarnation': 7, 'state': 'up:active', 'state_seq': 1263, 'addr': '192.168.16.15:6817/2358900897', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.15:6816', 'nonce': 2358900897}, {'type': 'v1', 'addr': '192.168.16.15:6817', 'nonce': 2358900897}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 2026-03-18 05:06:01.775860 | orchestrator | 2026-03-18 05:06:01.775871 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-03-18 05:06:01.775882 | orchestrator | Wednesday 18 March 2026 05:05:43 +0000 (0:00:00.207) 0:22:14.736 ******* 2026-03-18 05:06:01.775893 | orchestrator | 
skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-18 05:06:01.775903 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-18 05:06:01.775914 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-5) 2026-03-18 05:06:01.775925 | orchestrator | 2026-03-18 05:06:01.775935 | orchestrator | TASK [Create standby_mdss group] *********************************************** 2026-03-18 05:06:01.775946 | orchestrator | Wednesday 18 March 2026 05:05:43 +0000 (0:00:00.565) 0:22:15.301 ******* 2026-03-18 05:06:01.775965 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4) 2026-03-18 05:06:01.775976 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3) 2026-03-18 05:06:01.775987 | orchestrator | 2026-03-18 05:06:01.775998 | orchestrator | TASK [Stop standby ceph mds] *************************************************** 2026-03-18 05:06:01.776008 | orchestrator | Wednesday 18 March 2026 05:05:44 +0000 (0:00:00.541) 0:22:15.843 ******* 2026-03-18 05:06:01.776019 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 05:06:01.776030 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-18 05:06:01.776040 | orchestrator | 2026-03-18 05:06:01.776051 | orchestrator | TASK [Mask systemd units for standby ceph mds] ********************************* 2026-03-18 05:06:01.776062 | orchestrator | Wednesday 18 March 2026 05:05:54 +0000 (0:00:10.248) 0:22:26.092 ******* 2026-03-18 05:06:01.776072 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 05:06:01.776083 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-18 05:06:01.776093 | orchestrator | 2026-03-18 05:06:01.776104 | orchestrator | TASK [Wait until all standbys mds are stopped] ********************************* 2026-03-18 05:06:01.776115 | 
orchestrator | Wednesday 18 March 2026 05:05:57 +0000 (0:00:02.756) 0:22:28.849 ******* 2026-03-18 05:06:01.776125 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:06:01.776136 | orchestrator | 2026-03-18 05:06:01.776147 | orchestrator | TASK [Create active_mdss group] ************************************************ 2026-03-18 05:06:01.776157 | orchestrator | Wednesday 18 March 2026 05:05:58 +0000 (0:00:01.193) 0:22:30.042 ******* 2026-03-18 05:06:01.776168 | orchestrator | changed: [testbed-node-0] 2026-03-18 05:06:01.776205 | orchestrator | 2026-03-18 05:06:01.776226 | orchestrator | PLAY [Upgrade active mds] ****************************************************** 2026-03-18 05:06:01.776247 | orchestrator | 2026-03-18 05:06:01.776266 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 05:06:01.776285 | orchestrator | Wednesday 18 March 2026 05:05:59 +0000 (0:00:00.857) 0:22:30.900 ******* 2026-03-18 05:06:01.776297 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-18 05:06:01.776307 | orchestrator | 2026-03-18 05:06:01.776318 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-18 05:06:01.776329 | orchestrator | Wednesday 18 March 2026 05:05:59 +0000 (0:00:00.314) 0:22:31.215 ******* 2026-03-18 05:06:01.776339 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:01.776350 | orchestrator | 2026-03-18 05:06:01.776360 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-18 05:06:01.776371 | orchestrator | Wednesday 18 March 2026 05:06:00 +0000 (0:00:00.432) 0:22:31.647 ******* 2026-03-18 05:06:01.776381 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:01.776392 | orchestrator | 2026-03-18 05:06:01.776402 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 05:06:01.776413 | orchestrator | 
Wednesday 18 March 2026 05:06:00 +0000 (0:00:00.501) 0:22:32.149 ******* 2026-03-18 05:06:01.776423 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:01.776434 | orchestrator | 2026-03-18 05:06:01.776444 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 05:06:01.776461 | orchestrator | Wednesday 18 March 2026 05:06:00 +0000 (0:00:00.454) 0:22:32.604 ******* 2026-03-18 05:06:01.776472 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:01.776482 | orchestrator | 2026-03-18 05:06:01.776493 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-18 05:06:01.776504 | orchestrator | Wednesday 18 March 2026 05:06:01 +0000 (0:00:00.155) 0:22:32.760 ******* 2026-03-18 05:06:01.776514 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:01.776525 | orchestrator | 2026-03-18 05:06:01.776535 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-18 05:06:01.776546 | orchestrator | Wednesday 18 March 2026 05:06:01 +0000 (0:00:00.154) 0:22:32.915 ******* 2026-03-18 05:06:01.776564 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:01.776574 | orchestrator | 2026-03-18 05:06:01.776585 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-18 05:06:01.776595 | orchestrator | Wednesday 18 March 2026 05:06:01 +0000 (0:00:00.151) 0:22:33.067 ******* 2026-03-18 05:06:01.776606 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:01.776617 | orchestrator | 2026-03-18 05:06:01.776627 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-18 05:06:01.776638 | orchestrator | Wednesday 18 March 2026 05:06:01 +0000 (0:00:00.168) 0:22:33.235 ******* 2026-03-18 05:06:01.776649 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:01.776660 | orchestrator | 2026-03-18 05:06:01.776678 | orchestrator | TASK 
[ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-18 05:06:09.911528 | orchestrator | Wednesday 18 March 2026 05:06:01 +0000 (0:00:00.144) 0:22:33.379 ******* 2026-03-18 05:06:09.911644 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 05:06:09.911661 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:06:09.911673 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:06:09.911685 | orchestrator | 2026-03-18 05:06:09.911697 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-18 05:06:09.911709 | orchestrator | Wednesday 18 March 2026 05:06:02 +0000 (0:00:00.757) 0:22:34.137 ******* 2026-03-18 05:06:09.911720 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:09.911732 | orchestrator | 2026-03-18 05:06:09.911743 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-18 05:06:09.911754 | orchestrator | Wednesday 18 March 2026 05:06:02 +0000 (0:00:00.294) 0:22:34.431 ******* 2026-03-18 05:06:09.911764 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 05:06:09.911775 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:06:09.911786 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:06:09.911797 | orchestrator | 2026-03-18 05:06:09.911808 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-18 05:06:09.911818 | orchestrator | Wednesday 18 March 2026 05:06:05 +0000 (0:00:02.228) 0:22:36.659 ******* 2026-03-18 05:06:09.911830 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-18 05:06:09.911842 | orchestrator | 
skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-18 05:06:09.911852 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-18 05:06:09.911863 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:09.911875 | orchestrator | 2026-03-18 05:06:09.911886 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-18 05:06:09.911897 | orchestrator | Wednesday 18 March 2026 05:06:05 +0000 (0:00:00.470) 0:22:37.130 ******* 2026-03-18 05:06:09.911910 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-18 05:06:09.911924 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-18 05:06:09.911935 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-18 05:06:09.911946 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:09.911957 | orchestrator | 2026-03-18 05:06:09.911968 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-18 05:06:09.911999 | orchestrator | Wednesday 18 March 2026 05:06:06 +0000 (0:00:00.952) 0:22:38.083 ******* 2026-03-18 05:06:09.912013 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:06:09.912039 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:06:09.912051 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:06:09.912064 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:09.912079 | orchestrator | 2026-03-18 05:06:09.912092 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-18 05:06:09.912105 | orchestrator | Wednesday 18 March 2026 05:06:06 +0000 (0:00:00.182) 0:22:38.265 ******* 2026-03-18 05:06:09.912141 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'f231ed715636', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 05:06:03.359982', 'end': '2026-03-18 05:06:03.399738', 'delta': '0:00:00.039756', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f231ed715636'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-18 05:06:09.912157 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'c6b616adb9bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 05:06:03.909561', 'end': '2026-03-18 05:06:03.960273', 'delta': '0:00:00.050712', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c6b616adb9bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-18 05:06:09.912170 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '38d5679b5612', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 05:06:04.829774', 'end': '2026-03-18 05:06:04.875369', 'delta': '0:00:00.045595', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['38d5679b5612'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-18 05:06:09.912222 | orchestrator | 2026-03-18 05:06:09.912237 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-18 05:06:09.912250 | 
orchestrator | Wednesday 18 March 2026 05:06:07 +0000 (0:00:00.524) 0:22:38.790 ******* 2026-03-18 05:06:09.912264 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:09.912277 | orchestrator | 2026-03-18 05:06:09.912290 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-18 05:06:09.912303 | orchestrator | Wednesday 18 March 2026 05:06:07 +0000 (0:00:00.291) 0:22:39.082 ******* 2026-03-18 05:06:09.912315 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:09.912328 | orchestrator | 2026-03-18 05:06:09.912340 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-18 05:06:09.912353 | orchestrator | Wednesday 18 March 2026 05:06:07 +0000 (0:00:00.282) 0:22:39.365 ******* 2026-03-18 05:06:09.912366 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:09.912379 | orchestrator | 2026-03-18 05:06:09.912392 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-18 05:06:09.912405 | orchestrator | Wednesday 18 March 2026 05:06:07 +0000 (0:00:00.163) 0:22:39.529 ******* 2026-03-18 05:06:09.912418 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-18 05:06:09.912428 | orchestrator | 2026-03-18 05:06:09.912439 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 05:06:09.912450 | orchestrator | Wednesday 18 March 2026 05:06:08 +0000 (0:00:00.971) 0:22:40.501 ******* 2026-03-18 05:06:09.912460 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:09.912471 | orchestrator | 2026-03-18 05:06:09.912487 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-18 05:06:09.912498 | orchestrator | Wednesday 18 March 2026 05:06:09 +0000 (0:00:00.174) 0:22:40.675 ******* 2026-03-18 05:06:09.912508 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:09.912519 | orchestrator | 
2026-03-18 05:06:09.912530 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-18 05:06:09.912540 | orchestrator | Wednesday 18 March 2026 05:06:09 +0000 (0:00:00.137) 0:22:40.813 ******* 2026-03-18 05:06:09.912551 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:09.912562 | orchestrator | 2026-03-18 05:06:09.912572 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 05:06:09.912583 | orchestrator | Wednesday 18 March 2026 05:06:09 +0000 (0:00:00.221) 0:22:41.035 ******* 2026-03-18 05:06:09.912593 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:09.912604 | orchestrator | 2026-03-18 05:06:09.912615 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-18 05:06:09.912625 | orchestrator | Wednesday 18 March 2026 05:06:09 +0000 (0:00:00.167) 0:22:41.202 ******* 2026-03-18 05:06:09.912636 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:09.912646 | orchestrator | 2026-03-18 05:06:09.912657 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-18 05:06:09.912668 | orchestrator | Wednesday 18 March 2026 05:06:09 +0000 (0:00:00.144) 0:22:41.347 ******* 2026-03-18 05:06:09.912686 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:11.086298 | orchestrator | 2026-03-18 05:06:11.086395 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-18 05:06:11.086410 | orchestrator | Wednesday 18 March 2026 05:06:09 +0000 (0:00:00.176) 0:22:41.523 ******* 2026-03-18 05:06:11.086420 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:11.086431 | orchestrator | 2026-03-18 05:06:11.086439 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-18 05:06:11.086447 | orchestrator | Wednesday 18 March 2026 05:06:10 +0000 
(0:00:00.138) 0:22:41.662 ******* 2026-03-18 05:06:11.086455 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:11.086464 | orchestrator | 2026-03-18 05:06:11.086472 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-18 05:06:11.086480 | orchestrator | Wednesday 18 March 2026 05:06:10 +0000 (0:00:00.176) 0:22:41.839 ******* 2026-03-18 05:06:11.086511 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:11.086519 | orchestrator | 2026-03-18 05:06:11.086527 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-18 05:06:11.086535 | orchestrator | Wednesday 18 March 2026 05:06:10 +0000 (0:00:00.125) 0:22:41.964 ******* 2026-03-18 05:06:11.086542 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:11.086551 | orchestrator | 2026-03-18 05:06:11.086559 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-18 05:06:11.086567 | orchestrator | Wednesday 18 March 2026 05:06:10 +0000 (0:00:00.506) 0:22:42.470 ******* 2026-03-18 05:06:11.086578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:06:11.086590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f', 'dm-uuid-LVM-IyJ409WPQ2Ewwg643e4T8GcTWsVLXvc4PfxdfcUZHCmpn1f575ZO5FoE28c03VdS'], 'uuids': ['0c1ae19d-2c32-4e94-8f09-c34bb952e967'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'serial': '54344bae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS']}})  2026-03-18 05:06:11.086602 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216', 'scsi-SQEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '343cfa22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-18 05:06:11.086627 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-wEEZ4B-D8dq-p1QG-iT9B-teZl-6bRA-4Rtw7V', 'scsi-0QEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00', 'scsi-SQEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '92bad715', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea']}})  2026-03-18 05:06:11.086636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:06:11.086665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:06:11.086682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 05:06:11.086692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:06:11.086700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw', 'dm-uuid-CRYPT-LUKS2-b658b175f7d84bc1a9acacbdfc2fb3a4-T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-18 05:06:11.086708 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:06:11.086716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea', 'dm-uuid-LVM-datDZvt3H0VWDhIXtfyG2nxxdM9DebWAT9QYVvDcd9eNFRbEejIJhI9dObKuqGRw'], 'uuids': ['b658b175-f7d8-4bc1-a9ac-acbdfc2fb3a4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '92bad715', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw']}})  2026-03-18 05:06:11.086730 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-yCkM9t-1XKI-b30Y-UmhR-lcOf-KBlN-LK1ss0', 'scsi-0QEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568', 'scsi-SQEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '54344bae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f']}})  2026-03-18 05:06:11.086746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:06:11.432064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '15119f5e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})
2026-03-18 05:06:11.432172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:06:11.432264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:06:11.432281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS', 'dm-uuid-CRYPT-LUKS2-0c1ae19d2c324e948f09c34bb952e967-Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-18 05:06:11.432295 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:11.432308 | orchestrator |
2026-03-18 05:06:11.432320 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-18 05:06:11.432352 | orchestrator | Wednesday 18 March 2026 05:06:11 +0000 (0:00:00.344) 0:22:42.814 *******
2026-03-18 05:06:11.432385 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:11.432399 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f', 'dm-uuid-LVM-IyJ409WPQ2Ewwg643e4T8GcTWsVLXvc4PfxdfcUZHCmpn1f575ZO5FoE28c03VdS'], 'uuids': ['0c1ae19d-2c32-4e94-8f09-c34bb952e967'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '54344bae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS']}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:11.432411 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216', 'scsi-SQEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '343cfa22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:11.432429 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-wEEZ4B-D8dq-p1QG-iT9B-teZl-6bRA-4Rtw7V', 'scsi-0QEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00', 'scsi-SQEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '92bad715', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea']}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:11.432442 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:11.432473 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:11.617649 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:11.617826 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:11.617877 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw', 'dm-uuid-CRYPT-LUKS2-b658b175f7d84bc1a9acacbdfc2fb3a4-T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:11.617890 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:11.617916 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea', 'dm-uuid-LVM-datDZvt3H0VWDhIXtfyG2nxxdM9DebWAT9QYVvDcd9eNFRbEejIJhI9dObKuqGRw'], 'uuids': ['b658b175-f7d8-4bc1-a9ac-acbdfc2fb3a4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '92bad715', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw']}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:11.617975 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-yCkM9t-1XKI-b30Y-UmhR-lcOf-KBlN-LK1ss0', 'scsi-0QEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568', 'scsi-SQEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '54344bae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f']}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:11.617989 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:11.618007 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '15119f5e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:11.618076 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:11.618097 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:22.213783 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS', 'dm-uuid-CRYPT-LUKS2-0c1ae19d2c324e948f09c34bb952e967-Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:06:22.213902 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:22.213920 | orchestrator |
2026-03-18 05:06:22.213934 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-18 05:06:22.213947 | orchestrator | Wednesday 18 March 2026 05:06:11 +0000 (0:00:00.491) 0:22:43.227 *******
2026-03-18 05:06:22.213958 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:06:22.213970 | orchestrator |
2026-03-18 05:06:22.213981 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-18 05:06:22.213992 | orchestrator | Wednesday 18 March 2026 05:06:12 +0000 (0:00:00.136) 0:22:43.718 *******
2026-03-18 05:06:22.214003 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:06:22.214072 | orchestrator |
2026-03-18 05:06:22.214087 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-18 05:06:22.214098 | orchestrator | Wednesday 18 March 2026 05:06:12 +0000 (0:00:00.477) 0:22:43.855 *******
2026-03-18 05:06:22.214109 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:06:22.214120 | orchestrator |
2026-03-18 05:06:22.214131 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-18 05:06:22.214142 | orchestrator | Wednesday 18 March 2026 05:06:12 +0000 (0:00:00.144) 0:22:44.333 *******
2026-03-18 05:06:22.214153 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:22.214164 | orchestrator |
2026-03-18 05:06:22.214175 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-18 05:06:22.214186 | orchestrator | Wednesday 18 March 2026 05:06:12 +0000 (0:00:00.257) 0:22:44.477 *******
2026-03-18 05:06:22.214257 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:22.214269 | orchestrator |
2026-03-18 05:06:22.214306 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-18 05:06:22.214321 | orchestrator | Wednesday 18 March 2026 05:06:13 +0000 (0:00:00.163) 0:22:44.735 *******
2026-03-18 05:06:22.214334 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:22.214346 | orchestrator |
2026-03-18 05:06:22.214359 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-18 05:06:22.214372 | orchestrator | Wednesday 18 March 2026 05:06:13 +0000 (0:00:00.163) 0:22:44.898 *******
2026-03-18 05:06:22.214384 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-18 05:06:22.214397 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-18 05:06:22.214411 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-18 05:06:22.214424 | orchestrator |
2026-03-18 05:06:22.214436 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-18 05:06:22.214449 | orchestrator | Wednesday 18 March 2026 05:06:14 +0000 (0:00:01.086) 0:22:45.985 *******
2026-03-18 05:06:22.214462 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-18 05:06:22.214475 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-18 05:06:22.214487 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-18 05:06:22.214499 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:22.214512 | orchestrator |
2026-03-18 05:06:22.214524 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-18 05:06:22.214537 | orchestrator | Wednesday 18 March 2026 05:06:14 +0000 (0:00:00.238) 0:22:46.155 *******
2026-03-18 05:06:22.214549 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-03-18 05:06:22.214563 | orchestrator |
2026-03-18 05:06:22.214577 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-18 05:06:22.214592 | orchestrator | Wednesday 18 March 2026 05:06:14 +0000 (0:00:00.238) 0:22:46.394 *******
2026-03-18 05:06:22.214604 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:22.214616 | orchestrator |
2026-03-18 05:06:22.214629 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-18 05:06:22.214641 | orchestrator | Wednesday 18 March 2026 05:06:15 +0000 (0:00:00.479) 0:22:46.873 *******
2026-03-18 05:06:22.214654 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:22.214667 | orchestrator |
2026-03-18 05:06:22.214678 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-18 05:06:22.214736 | orchestrator | Wednesday 18 March 2026 05:06:15 +0000 (0:00:00.169) 0:22:47.043 *******
2026-03-18 05:06:22.214748 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:22.214759 | orchestrator |
2026-03-18 05:06:22.214770 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-18 05:06:22.214781 | orchestrator | Wednesday 18 March 2026 05:06:15 +0000 (0:00:00.163) 0:22:47.206 *******
2026-03-18 05:06:22.214792 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:06:22.214802 | orchestrator |
2026-03-18 05:06:22.214813 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-18 05:06:22.214824 | orchestrator | Wednesday 18 March 2026 05:06:15 +0000 (0:00:00.243) 0:22:47.449 *******
2026-03-18 05:06:22.214835 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-18 05:06:22.214863 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-18 05:06:22.214875 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-18 05:06:22.214885 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:22.214896 | orchestrator |
2026-03-18 05:06:22.214907 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-18 05:06:22.214918 | orchestrator | Wednesday 18 March 2026 05:06:16 +0000 (0:00:00.410) 0:22:47.860 *******
2026-03-18 05:06:22.214929 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-18 05:06:22.214939 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-18 05:06:22.214958 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-18 05:06:22.214969 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:22.214980 | orchestrator |
2026-03-18 05:06:22.214991 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-18 05:06:22.215001 | orchestrator | Wednesday 18 March 2026 05:06:16 +0000 (0:00:00.395) 0:22:48.255 *******
2026-03-18 05:06:22.215012 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-18 05:06:22.215023 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-18 05:06:22.215033 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-18 05:06:22.215044 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:22.215055 | orchestrator |
2026-03-18 05:06:22.215066 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-18 05:06:22.215076 | orchestrator | Wednesday 18 March 2026 05:06:17 +0000 (0:00:00.411) 0:22:48.667 *******
2026-03-18 05:06:22.215087 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:06:22.215098 | orchestrator |
2026-03-18 05:06:22.215109 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-18 05:06:22.215120 | orchestrator | Wednesday 18 March 2026 05:06:17 +0000 (0:00:00.171) 0:22:48.838 *******
2026-03-18 05:06:22.215130 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-18 05:06:22.215141 | orchestrator |
2026-03-18 05:06:22.215152 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-18 05:06:22.215163 | orchestrator | Wednesday 18 March 2026 05:06:17 +0000 (0:00:00.381) 0:22:49.220 *******
2026-03-18 05:06:22.215173 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 05:06:22.215184 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 05:06:22.215216 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 05:06:22.215228 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-18 05:06:22.215239 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-18 05:06:22.215249 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-18 05:06:22.215265 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 05:06:22.215284 | orchestrator |
2026-03-18 05:06:22.215303 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-18 05:06:22.215322 | orchestrator | Wednesday 18 March 2026 05:06:18 +0000 (0:00:01.194) 0:22:50.415 *******
2026-03-18 05:06:22.215347 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 05:06:22.215367 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 05:06:22.215386 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 05:06:22.215404 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-18 05:06:22.215423 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-18 05:06:22.215434 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-18 05:06:22.215445 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 05:06:22.215456 | orchestrator |
2026-03-18 05:06:22.215466 | orchestrator | TASK [Prevent restart from the packaging] **************************************
2026-03-18 05:06:22.215477 | orchestrator | Wednesday 18 March 2026 05:06:20 +0000 (0:00:01.828) 0:22:52.243 *******
2026-03-18 05:06:22.215487 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:22.215498 | orchestrator |
2026-03-18 05:06:22.215509 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-18 05:06:22.215519 | orchestrator | Wednesday 18 March 2026 05:06:20 +0000 (0:00:00.140) 0:22:52.384 *******
2026-03-18 05:06:22.215530 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-03-18 05:06:22.215549 | orchestrator |
2026-03-18 05:06:22.215560 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-18 05:06:22.215571 | orchestrator | Wednesday 18 March 2026 05:06:21 +0000 (0:00:00.532) 0:22:52.916 *******
2026-03-18 05:06:22.215581 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-03-18 05:06:22.215592 | orchestrator |
2026-03-18 05:06:22.215603 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-18 05:06:22.215614 | orchestrator | Wednesday 18 March 2026 05:06:21 +0000 (0:00:00.135) 0:22:53.158 *******
2026-03-18 05:06:22.215624 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:22.215635 | orchestrator |
2026-03-18 05:06:22.215646 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-18 05:06:22.215656 | orchestrator | Wednesday 18 March 2026 05:06:21 +0000 (0:00:00.520) 0:22:53.294 *******
2026-03-18 05:06:22.215667 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:06:22.215677 | orchestrator |
2026-03-18 05:06:22.215688 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-18 05:06:22.215707 | orchestrator | Wednesday 18 March 2026 05:06:22 +0000 (0:00:00.527) 0:22:53.814 *******
2026-03-18 05:06:33.750459 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:06:33.750576 | orchestrator |
2026-03-18 05:06:33.750594 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-18 05:06:33.750607 | orchestrator | Wednesday 18 March 2026 05:06:22 +0000 (0:00:00.527) 0:22:54.341 *******
2026-03-18 05:06:33.750619 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:06:33.750630 | orchestrator |
2026-03-18 05:06:33.750641 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-18 05:06:33.750652 | orchestrator | Wednesday 18 March 2026 05:06:23 +0000 (0:00:00.557) 0:22:54.899 *******
2026-03-18 05:06:33.750663 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.750675 | orchestrator |
2026-03-18 05:06:33.750686 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-18 05:06:33.750697 | orchestrator | Wednesday 18 March 2026 05:06:23 +0000 (0:00:00.141) 0:22:55.040 *******
2026-03-18 05:06:33.750708 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.750719 | orchestrator |
2026-03-18 05:06:33.750729 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-18 05:06:33.750740 | orchestrator | Wednesday 18 March 2026 05:06:23 +0000 (0:00:00.136) 0:22:55.185 *******
2026-03-18 05:06:33.750751 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.750762 | orchestrator |
2026-03-18 05:06:33.750773 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-18 05:06:33.750784 | orchestrator | Wednesday 18 March 2026 05:06:23 +0000 (0:00:00.136) 0:22:55.321 *******
2026-03-18 05:06:33.750795 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:06:33.750806 | orchestrator |
2026-03-18 05:06:33.750816 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-18 05:06:33.750827 | orchestrator | Wednesday 18 March 2026 05:06:24 +0000 (0:00:00.535) 0:22:55.857 *******
2026-03-18 05:06:33.750838 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:06:33.750849 | orchestrator |
2026-03-18 05:06:33.750860 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-18 05:06:33.750871 | orchestrator | Wednesday 18 March 2026 05:06:24 +0000 (0:00:00.569) 0:22:56.426 *******
2026-03-18 05:06:33.750881 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.750892 | orchestrator |
2026-03-18 05:06:33.750903 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-18 05:06:33.750914 | orchestrator | Wednesday 18 March 2026 05:06:25 +0000 (0:00:00.432) 0:22:56.859 *******
2026-03-18 05:06:33.750925 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.750936 | orchestrator |
2026-03-18 05:06:33.750946 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-18 05:06:33.750982 | orchestrator | Wednesday 18 March 2026 05:06:25 +0000 (0:00:00.147) 0:22:57.006 *******
2026-03-18 05:06:33.750997 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:06:33.751011 | orchestrator |
2026-03-18 05:06:33.751023 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-18 05:06:33.751036 | orchestrator | Wednesday 18 March 2026 05:06:25 +0000 (0:00:00.162) 0:22:57.169 *******
2026-03-18 05:06:33.751049 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:06:33.751062 | orchestrator |
2026-03-18 05:06:33.751075 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-18 05:06:33.751087 | orchestrator | Wednesday 18 March 2026 05:06:25 +0000 (0:00:00.165) 0:22:57.335 *******
2026-03-18 05:06:33.751100 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:06:33.751113 | orchestrator |
2026-03-18 05:06:33.751140 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-18 05:06:33.751153 | orchestrator | Wednesday 18 March 2026 05:06:25 +0000 (0:00:00.159) 0:22:57.494 *******
2026-03-18 05:06:33.751166 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.751179 | orchestrator |
2026-03-18 05:06:33.751191 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-18 05:06:33.751227 | orchestrator | Wednesday 18 March 2026 05:06:26 +0000 (0:00:00.138) 0:22:57.633 *******
2026-03-18 05:06:33.751241 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.751253 | orchestrator |
2026-03-18 05:06:33.751266 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-18 05:06:33.751292 | orchestrator | Wednesday 18 March 2026 05:06:26 +0000 (0:00:00.143) 0:22:57.776 *******
2026-03-18 05:06:33.751304 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.751317 | orchestrator |
2026-03-18 05:06:33.751329 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-18 05:06:33.751352 | orchestrator | Wednesday 18 March 2026 05:06:26 +0000 (0:00:00.142) 0:22:57.918 *******
2026-03-18 05:06:33.751363 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:06:33.751374 | orchestrator |
2026-03-18 05:06:33.751385 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-18 05:06:33.751396 | orchestrator | Wednesday 18 March 2026 05:06:26 +0000 (0:00:00.148) 0:22:58.067 *******
2026-03-18 05:06:33.751407 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:06:33.751417 | orchestrator |
2026-03-18 05:06:33.751428 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-18 05:06:33.751439 | orchestrator | Wednesday 18 March 2026 05:06:26 +0000 (0:00:00.264) 0:22:58.332 *******
2026-03-18 05:06:33.751451 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.751461 | orchestrator |
2026-03-18 05:06:33.751472 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-18 05:06:33.751483 | orchestrator | Wednesday 18 March 2026 05:06:26 +0000 (0:00:00.132) 0:22:58.465 *******
2026-03-18 05:06:33.751494 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.751505 | orchestrator |
2026-03-18 05:06:33.751516 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-18 05:06:33.751527 | orchestrator | Wednesday 18 March 2026 05:06:26 +0000 (0:00:00.140) 0:22:58.605 *******
2026-03-18 05:06:33.751537 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.751548 | orchestrator |
2026-03-18 05:06:33.751559 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-18 05:06:33.751570 | orchestrator | Wednesday 18 March 2026 05:06:27 +0000 (0:00:00.440) 0:22:59.045 *******
2026-03-18 05:06:33.751581 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.751592 | orchestrator |
2026-03-18 05:06:33.751603 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-18 05:06:33.751630 | orchestrator | Wednesday 18 March 2026 05:06:27 +0000 (0:00:00.136) 0:22:59.182 *******
2026-03-18 05:06:33.751642 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.751653 | orchestrator |
2026-03-18 05:06:33.751664 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-18 05:06:33.751683 | orchestrator | Wednesday 18 March 2026 05:06:27 +0000 (0:00:00.142) 0:22:59.325 *******
2026-03-18 05:06:33.751694 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.751705 | orchestrator |
2026-03-18 05:06:33.751715 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-18 05:06:33.751726 | orchestrator | Wednesday 18 March 2026 05:06:27 +0000 (0:00:00.135) 0:22:59.461 *******
2026-03-18 05:06:33.751737 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.751748 | orchestrator |
2026-03-18 05:06:33.751759 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-18 05:06:33.751771 | orchestrator | Wednesday 18 March 2026 05:06:27 +0000 (0:00:00.145) 0:22:59.606 *******
2026-03-18 05:06:33.751782 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.751792 | orchestrator |
2026-03-18 05:06:33.751803 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-18 05:06:33.751814 | orchestrator | Wednesday 18 March 2026 05:06:28 +0000 (0:00:00.144) 0:22:59.751 *******
2026-03-18 05:06:33.751825 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.751835 | orchestrator |
2026-03-18 05:06:33.751846 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-18 05:06:33.751857 | orchestrator | Wednesday 18 March 2026 05:06:28 +0000 (0:00:00.132) 0:22:59.884 *******
2026-03-18 05:06:33.751868 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:06:33.751878 | orchestrator |
2026-03-18 05:06:33.751889 | orchestrator | TASK [ceph-common :
Include configure_memory_allocator.yml] ******************** 2026-03-18 05:06:33.751900 | orchestrator | Wednesday 18 March 2026 05:06:28 +0000 (0:00:00.132) 0:23:00.016 ******* 2026-03-18 05:06:33.751911 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:33.751921 | orchestrator | 2026-03-18 05:06:33.751932 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-18 05:06:33.751943 | orchestrator | Wednesday 18 March 2026 05:06:28 +0000 (0:00:00.139) 0:23:00.156 ******* 2026-03-18 05:06:33.751954 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:33.751965 | orchestrator | 2026-03-18 05:06:33.751975 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-18 05:06:33.751986 | orchestrator | Wednesday 18 March 2026 05:06:28 +0000 (0:00:00.201) 0:23:00.357 ******* 2026-03-18 05:06:33.751996 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:33.752007 | orchestrator | 2026-03-18 05:06:33.752018 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-18 05:06:33.752029 | orchestrator | Wednesday 18 March 2026 05:06:29 +0000 (0:00:00.920) 0:23:01.277 ******* 2026-03-18 05:06:33.752040 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:33.752051 | orchestrator | 2026-03-18 05:06:33.752061 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-18 05:06:33.752072 | orchestrator | Wednesday 18 March 2026 05:06:30 +0000 (0:00:01.288) 0:23:02.566 ******* 2026-03-18 05:06:33.752083 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-03-18 05:06:33.752095 | orchestrator | 2026-03-18 05:06:33.752111 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-18 05:06:33.752122 | orchestrator | Wednesday 18 March 2026 05:06:31 +0000 (0:00:00.498) 
0:23:03.065 ******* 2026-03-18 05:06:33.752133 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:33.752144 | orchestrator | 2026-03-18 05:06:33.752154 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-18 05:06:33.752165 | orchestrator | Wednesday 18 March 2026 05:06:31 +0000 (0:00:00.157) 0:23:03.222 ******* 2026-03-18 05:06:33.752176 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:33.752187 | orchestrator | 2026-03-18 05:06:33.752197 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-18 05:06:33.752229 | orchestrator | Wednesday 18 March 2026 05:06:31 +0000 (0:00:00.153) 0:23:03.376 ******* 2026-03-18 05:06:33.752240 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-18 05:06:33.752257 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-18 05:06:33.752268 | orchestrator | 2026-03-18 05:06:33.752279 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-18 05:06:33.752290 | orchestrator | Wednesday 18 March 2026 05:06:32 +0000 (0:00:00.796) 0:23:04.172 ******* 2026-03-18 05:06:33.752300 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:33.752311 | orchestrator | 2026-03-18 05:06:33.752322 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-18 05:06:33.752333 | orchestrator | Wednesday 18 March 2026 05:06:33 +0000 (0:00:00.474) 0:23:04.647 ******* 2026-03-18 05:06:33.752344 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:33.752354 | orchestrator | 2026-03-18 05:06:33.752365 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-18 05:06:33.752376 | orchestrator | Wednesday 18 March 2026 05:06:33 +0000 (0:00:00.171) 0:23:04.818 ******* 2026-03-18 05:06:33.752387 | 
orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:33.752397 | orchestrator | 2026-03-18 05:06:33.752408 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-18 05:06:33.752419 | orchestrator | Wednesday 18 March 2026 05:06:33 +0000 (0:00:00.150) 0:23:04.969 ******* 2026-03-18 05:06:33.752430 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:33.752440 | orchestrator | 2026-03-18 05:06:33.752451 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-18 05:06:33.752462 | orchestrator | Wednesday 18 March 2026 05:06:33 +0000 (0:00:00.148) 0:23:05.117 ******* 2026-03-18 05:06:33.752473 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5 2026-03-18 05:06:33.752483 | orchestrator | 2026-03-18 05:06:33.752494 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-18 05:06:33.752511 | orchestrator | Wednesday 18 March 2026 05:06:33 +0000 (0:00:00.235) 0:23:05.352 ******* 2026-03-18 05:06:48.485640 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:48.485759 | orchestrator | 2026-03-18 05:06:48.485776 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-18 05:06:48.485790 | orchestrator | Wednesday 18 March 2026 05:06:34 +0000 (0:00:00.715) 0:23:06.068 ******* 2026-03-18 05:06:48.485833 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 05:06:48.485851 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 05:06:48.485871 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-18 05:06:48.485889 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.485908 | orchestrator | 2026-03-18 05:06:48.485924 | orchestrator | TASK [ceph-container-common 
: Pulling node-exporter container image] *********** 2026-03-18 05:06:48.485941 | orchestrator | Wednesday 18 March 2026 05:06:34 +0000 (0:00:00.149) 0:23:06.217 ******* 2026-03-18 05:06:48.485959 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.485977 | orchestrator | 2026-03-18 05:06:48.485995 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-18 05:06:48.486081 | orchestrator | Wednesday 18 March 2026 05:06:34 +0000 (0:00:00.145) 0:23:06.362 ******* 2026-03-18 05:06:48.486111 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.486131 | orchestrator | 2026-03-18 05:06:48.486150 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-18 05:06:48.486162 | orchestrator | Wednesday 18 March 2026 05:06:35 +0000 (0:00:00.502) 0:23:06.865 ******* 2026-03-18 05:06:48.486173 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.486186 | orchestrator | 2026-03-18 05:06:48.486199 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-18 05:06:48.486212 | orchestrator | Wednesday 18 March 2026 05:06:35 +0000 (0:00:00.168) 0:23:07.034 ******* 2026-03-18 05:06:48.486254 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.486267 | orchestrator | 2026-03-18 05:06:48.486280 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-18 05:06:48.486319 | orchestrator | Wednesday 18 March 2026 05:06:35 +0000 (0:00:00.148) 0:23:07.183 ******* 2026-03-18 05:06:48.486333 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.486346 | orchestrator | 2026-03-18 05:06:48.486359 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-18 05:06:48.486372 | orchestrator | Wednesday 18 March 2026 05:06:35 +0000 (0:00:00.171) 0:23:07.354 ******* 2026-03-18 05:06:48.486391 | orchestrator | 
ok: [testbed-node-5] 2026-03-18 05:06:48.486410 | orchestrator | 2026-03-18 05:06:48.486437 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-18 05:06:48.486458 | orchestrator | Wednesday 18 March 2026 05:06:37 +0000 (0:00:01.459) 0:23:08.814 ******* 2026-03-18 05:06:48.486476 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:06:48.486494 | orchestrator | 2026-03-18 05:06:48.486512 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-18 05:06:48.486531 | orchestrator | Wednesday 18 March 2026 05:06:37 +0000 (0:00:00.167) 0:23:08.982 ******* 2026-03-18 05:06:48.486549 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 2026-03-18 05:06:48.486566 | orchestrator | 2026-03-18 05:06:48.486605 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-18 05:06:48.486625 | orchestrator | Wednesday 18 March 2026 05:06:37 +0000 (0:00:00.221) 0:23:09.203 ******* 2026-03-18 05:06:48.486643 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.486662 | orchestrator | 2026-03-18 05:06:48.486679 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-18 05:06:48.486695 | orchestrator | Wednesday 18 March 2026 05:06:37 +0000 (0:00:00.156) 0:23:09.360 ******* 2026-03-18 05:06:48.486706 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.486716 | orchestrator | 2026-03-18 05:06:48.486727 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-18 05:06:48.486738 | orchestrator | Wednesday 18 March 2026 05:06:37 +0000 (0:00:00.161) 0:23:09.522 ******* 2026-03-18 05:06:48.486748 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.486765 | orchestrator | 2026-03-18 05:06:48.486784 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
mimic] ********************* 2026-03-18 05:06:48.486801 | orchestrator | Wednesday 18 March 2026 05:06:38 +0000 (0:00:00.158) 0:23:09.680 ******* 2026-03-18 05:06:48.486818 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.486836 | orchestrator | 2026-03-18 05:06:48.486853 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-18 05:06:48.486870 | orchestrator | Wednesday 18 March 2026 05:06:38 +0000 (0:00:00.154) 0:23:09.834 ******* 2026-03-18 05:06:48.486888 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.486906 | orchestrator | 2026-03-18 05:06:48.487018 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-18 05:06:48.487037 | orchestrator | Wednesday 18 March 2026 05:06:38 +0000 (0:00:00.145) 0:23:09.980 ******* 2026-03-18 05:06:48.487057 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.487074 | orchestrator | 2026-03-18 05:06:48.487093 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-18 05:06:48.487105 | orchestrator | Wednesday 18 March 2026 05:06:38 +0000 (0:00:00.443) 0:23:10.424 ******* 2026-03-18 05:06:48.487115 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.487126 | orchestrator | 2026-03-18 05:06:48.487137 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-18 05:06:48.487148 | orchestrator | Wednesday 18 March 2026 05:06:38 +0000 (0:00:00.154) 0:23:10.579 ******* 2026-03-18 05:06:48.487158 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.487169 | orchestrator | 2026-03-18 05:06:48.487179 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-18 05:06:48.487190 | orchestrator | Wednesday 18 March 2026 05:06:39 +0000 (0:00:00.174) 0:23:10.753 ******* 2026-03-18 05:06:48.487201 | orchestrator | ok: [testbed-node-5] 
2026-03-18 05:06:48.487235 | orchestrator | 2026-03-18 05:06:48.487263 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-18 05:06:48.487297 | orchestrator | Wednesday 18 March 2026 05:06:39 +0000 (0:00:00.231) 0:23:10.985 ******* 2026-03-18 05:06:48.487309 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-03-18 05:06:48.487320 | orchestrator | 2026-03-18 05:06:48.487331 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-18 05:06:48.487342 | orchestrator | Wednesday 18 March 2026 05:06:39 +0000 (0:00:00.233) 0:23:11.218 ******* 2026-03-18 05:06:48.487353 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph) 2026-03-18 05:06:48.487365 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-18 05:06:48.487375 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-18 05:06:48.487386 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-18 05:06:48.487397 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-18 05:06:48.487407 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-18 05:06:48.487418 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-18 05:06:48.487429 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-18 05:06:48.487439 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 05:06:48.487450 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 05:06:48.487461 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 05:06:48.487472 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 05:06:48.487483 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 05:06:48.487493 | 
orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 05:06:48.487504 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-03-18 05:06:48.487515 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-03-18 05:06:48.487526 | orchestrator | 2026-03-18 05:06:48.487537 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-18 05:06:48.487548 | orchestrator | Wednesday 18 March 2026 05:06:45 +0000 (0:00:05.532) 0:23:16.751 ******* 2026-03-18 05:06:48.487558 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-03-18 05:06:48.487569 | orchestrator | 2026-03-18 05:06:48.487580 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-18 05:06:48.487591 | orchestrator | Wednesday 18 March 2026 05:06:45 +0000 (0:00:00.218) 0:23:16.969 ******* 2026-03-18 05:06:48.487602 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-18 05:06:48.487615 | orchestrator | 2026-03-18 05:06:48.487625 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-18 05:06:48.487636 | orchestrator | Wednesday 18 March 2026 05:06:45 +0000 (0:00:00.512) 0:23:17.482 ******* 2026-03-18 05:06:48.487647 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-18 05:06:48.487658 | orchestrator | 2026-03-18 05:06:48.487676 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-18 05:06:48.487687 | orchestrator | Wednesday 18 March 2026 05:06:46 +0000 (0:00:01.002) 0:23:18.484 ******* 2026-03-18 05:06:48.487698 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.487709 | orchestrator | 
2026-03-18 05:06:48.487719 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-18 05:06:48.487730 | orchestrator | Wednesday 18 March 2026 05:06:47 +0000 (0:00:00.154) 0:23:18.638 ******* 2026-03-18 05:06:48.487741 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.487752 | orchestrator | 2026-03-18 05:06:48.487763 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-18 05:06:48.487773 | orchestrator | Wednesday 18 March 2026 05:06:47 +0000 (0:00:00.458) 0:23:19.097 ******* 2026-03-18 05:06:48.487792 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.487803 | orchestrator | 2026-03-18 05:06:48.487814 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-18 05:06:48.487825 | orchestrator | Wednesday 18 March 2026 05:06:47 +0000 (0:00:00.133) 0:23:19.230 ******* 2026-03-18 05:06:48.487835 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.487846 | orchestrator | 2026-03-18 05:06:48.487857 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-18 05:06:48.487868 | orchestrator | Wednesday 18 March 2026 05:06:47 +0000 (0:00:00.129) 0:23:19.360 ******* 2026-03-18 05:06:48.487879 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.487889 | orchestrator | 2026-03-18 05:06:48.487900 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-18 05:06:48.487911 | orchestrator | Wednesday 18 March 2026 05:06:47 +0000 (0:00:00.146) 0:23:19.507 ******* 2026-03-18 05:06:48.487922 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.487933 | orchestrator | 2026-03-18 05:06:48.487944 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-18 05:06:48.487955 | 
orchestrator | Wednesday 18 March 2026 05:06:48 +0000 (0:00:00.146) 0:23:19.654 ******* 2026-03-18 05:06:48.487966 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.487976 | orchestrator | 2026-03-18 05:06:48.487987 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-18 05:06:48.487998 | orchestrator | Wednesday 18 March 2026 05:06:48 +0000 (0:00:00.137) 0:23:19.791 ******* 2026-03-18 05:06:48.488009 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.488019 | orchestrator | 2026-03-18 05:06:48.488030 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-18 05:06:48.488041 | orchestrator | Wednesday 18 March 2026 05:06:48 +0000 (0:00:00.140) 0:23:19.932 ******* 2026-03-18 05:06:48.488052 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:06:48.488063 | orchestrator | 2026-03-18 05:06:48.488081 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-18 05:07:13.799940 | orchestrator | Wednesday 18 March 2026 05:06:48 +0000 (0:00:00.155) 0:23:20.088 ******* 2026-03-18 05:07:13.800041 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:07:13.800053 | orchestrator | 2026-03-18 05:07:13.800062 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-18 05:07:13.800068 | orchestrator | Wednesday 18 March 2026 05:06:48 +0000 (0:00:00.149) 0:23:20.237 ******* 2026-03-18 05:07:13.800075 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:07:13.800082 | orchestrator | 2026-03-18 05:07:13.800088 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-18 05:07:13.800095 | orchestrator | Wednesday 18 March 2026 05:06:48 +0000 (0:00:00.210) 0:23:20.448 ******* 2026-03-18 05:07:13.800102 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] 2026-03-18 05:07:13.800110 | orchestrator | 2026-03-18 05:07:13.800117 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-18 05:07:13.800123 | orchestrator | Wednesday 18 March 2026 05:06:52 +0000 (0:00:03.521) 0:23:23.969 ******* 2026-03-18 05:07:13.800130 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-18 05:07:13.800139 | orchestrator | 2026-03-18 05:07:13.800146 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-18 05:07:13.800152 | orchestrator | Wednesday 18 March 2026 05:06:52 +0000 (0:00:00.188) 0:23:24.158 ******* 2026-03-18 05:07:13.800160 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-03-18 05:07:13.800191 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-03-18 05:07:13.800200 | orchestrator | 2026-03-18 05:07:13.800207 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-18 05:07:13.800214 | orchestrator | Wednesday 18 March 2026 05:06:56 +0000 (0:00:04.244) 0:23:28.402 ******* 2026-03-18 05:07:13.800221 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:07:13.800228 | orchestrator | 2026-03-18 05:07:13.800280 | orchestrator | TASK [ceph-config : Create ceph 
conf directory] ******************************** 2026-03-18 05:07:13.800288 | orchestrator | Wednesday 18 March 2026 05:06:57 +0000 (0:00:00.473) 0:23:28.876 ******* 2026-03-18 05:07:13.800294 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:07:13.800301 | orchestrator | 2026-03-18 05:07:13.800321 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 05:07:13.800329 | orchestrator | Wednesday 18 March 2026 05:06:57 +0000 (0:00:00.158) 0:23:29.034 ******* 2026-03-18 05:07:13.800336 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:07:13.800342 | orchestrator | 2026-03-18 05:07:13.800348 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-18 05:07:13.800355 | orchestrator | Wednesday 18 March 2026 05:06:57 +0000 (0:00:00.175) 0:23:29.209 ******* 2026-03-18 05:07:13.800361 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:07:13.800368 | orchestrator | 2026-03-18 05:07:13.800374 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 05:07:13.800381 | orchestrator | Wednesday 18 March 2026 05:06:57 +0000 (0:00:00.168) 0:23:29.378 ******* 2026-03-18 05:07:13.800388 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:07:13.800394 | orchestrator | 2026-03-18 05:07:13.800401 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 05:07:13.800408 | orchestrator | Wednesday 18 March 2026 05:06:57 +0000 (0:00:00.161) 0:23:29.540 ******* 2026-03-18 05:07:13.800415 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:07:13.800422 | orchestrator | 2026-03-18 05:07:13.800429 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 05:07:13.800435 | orchestrator | Wednesday 18 March 2026 05:06:58 +0000 (0:00:00.294) 0:23:29.834 
******* 2026-03-18 05:07:13.800442 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-18 05:07:13.800449 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-18 05:07:13.800455 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-18 05:07:13.800462 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:07:13.800467 | orchestrator | 2026-03-18 05:07:13.800474 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 05:07:13.800481 | orchestrator | Wednesday 18 March 2026 05:06:58 +0000 (0:00:00.427) 0:23:30.262 ******* 2026-03-18 05:07:13.800488 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-18 05:07:13.800495 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-18 05:07:13.800502 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-18 05:07:13.800509 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:07:13.800515 | orchestrator | 2026-03-18 05:07:13.800522 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 05:07:13.800529 | orchestrator | Wednesday 18 March 2026 05:06:59 +0000 (0:00:00.415) 0:23:30.678 ******* 2026-03-18 05:07:13.800535 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-18 05:07:13.800541 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-18 05:07:13.800548 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-18 05:07:13.800574 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:07:13.800582 | orchestrator | 2026-03-18 05:07:13.800589 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 05:07:13.800597 | orchestrator | Wednesday 18 March 2026 05:06:59 +0000 (0:00:00.447) 0:23:31.125 ******* 2026-03-18 05:07:13.800603 | orchestrator | ok: 
[testbed-node-5] 2026-03-18 05:07:13.800610 | orchestrator | 2026-03-18 05:07:13.800617 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 05:07:13.800624 | orchestrator | Wednesday 18 March 2026 05:06:59 +0000 (0:00:00.242) 0:23:31.368 ******* 2026-03-18 05:07:13.800631 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-18 05:07:13.800638 | orchestrator | 2026-03-18 05:07:13.800645 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-18 05:07:13.800652 | orchestrator | Wednesday 18 March 2026 05:07:00 +0000 (0:00:00.438) 0:23:31.806 ******* 2026-03-18 05:07:13.800659 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:07:13.800667 | orchestrator | 2026-03-18 05:07:13.800674 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-18 05:07:13.800680 | orchestrator | Wednesday 18 March 2026 05:07:01 +0000 (0:00:00.843) 0:23:32.650 ******* 2026-03-18 05:07:13.800687 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:07:13.800694 | orchestrator | 2026-03-18 05:07:13.800701 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-18 05:07:13.800709 | orchestrator | Wednesday 18 March 2026 05:07:01 +0000 (0:00:00.442) 0:23:33.092 ******* 2026-03-18 05:07:13.800717 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-5 2026-03-18 05:07:13.800724 | orchestrator | 2026-03-18 05:07:13.800731 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-18 05:07:13.800738 | orchestrator | Wednesday 18 March 2026 05:07:02 +0000 (0:00:00.643) 0:23:33.736 ******* 2026-03-18 05:07:13.800745 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-18 05:07:13.800752 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 
2026-03-18 05:07:13.800759 | orchestrator |
2026-03-18 05:07:13.800765 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-18 05:07:13.800771 | orchestrator | Wednesday 18 March 2026 05:07:02 +0000 (0:00:00.846) 0:23:34.582 *******
2026-03-18 05:07:13.800777 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 05:07:13.800783 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-18 05:07:13.800790 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-18 05:07:13.800796 | orchestrator |
2026-03-18 05:07:13.800802 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-18 05:07:13.800808 | orchestrator | Wednesday 18 March 2026 05:07:05 +0000 (0:00:02.243) 0:23:36.826 *******
2026-03-18 05:07:13.800814 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-03-18 05:07:13.800821 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-18 05:07:13.800828 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:07:13.800836 | orchestrator |
2026-03-18 05:07:13.800849 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-18 05:07:13.800855 | orchestrator | Wednesday 18 March 2026 05:07:06 +0000 (0:00:00.925) 0:23:37.751 *******
2026-03-18 05:07:13.800861 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:07:13.800867 | orchestrator |
2026-03-18 05:07:13.800874 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-18 05:07:13.800880 | orchestrator | Wednesday 18 March 2026 05:07:06 +0000 (0:00:00.588) 0:23:38.340 *******
2026-03-18 05:07:13.800887 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:07:13.800893 | orchestrator |
2026-03-18 05:07:13.800900 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-18 05:07:13.800907 | orchestrator | Wednesday 18 March 2026 05:07:06 +0000 (0:00:00.155) 0:23:38.496 *******
2026-03-18 05:07:13.800914 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5
2026-03-18 05:07:13.800927 | orchestrator |
2026-03-18 05:07:13.800933 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-18 05:07:13.800940 | orchestrator | Wednesday 18 March 2026 05:07:07 +0000 (0:00:00.598) 0:23:39.095 *******
2026-03-18 05:07:13.800947 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-5
2026-03-18 05:07:13.800953 | orchestrator |
2026-03-18 05:07:13.800959 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-18 05:07:13.800965 | orchestrator | Wednesday 18 March 2026 05:07:08 +0000 (0:00:01.059) 0:23:39.660 *******
2026-03-18 05:07:13.800972 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:07:13.800978 | orchestrator |
2026-03-18 05:07:13.800985 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-18 05:07:13.800992 | orchestrator | Wednesday 18 March 2026 05:07:09 +0000 (0:00:01.291) 0:23:40.720 *******
2026-03-18 05:07:13.800998 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:07:13.801004 | orchestrator |
2026-03-18 05:07:13.801011 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-18 05:07:13.801018 | orchestrator | Wednesday 18 March 2026 05:07:10 +0000 (0:00:01.291) 0:23:42.011 *******
2026-03-18 05:07:13.801024 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:07:13.801030 | orchestrator |
2026-03-18 05:07:13.801037 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-18 05:07:13.801043 | orchestrator | Wednesday 18 March 2026 05:07:11 +0000 (0:00:01.286) 0:23:43.298 *******
2026-03-18 05:07:13.801050 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:07:13.801057 | orchestrator |
2026-03-18 05:07:13.801063 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-18 05:07:13.801069 | orchestrator | Wednesday 18 March 2026 05:07:12 +0000 (0:00:01.266) 0:23:44.564 *******
2026-03-18 05:07:13.801076 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:07:13.801082 | orchestrator |
2026-03-18 05:07:13.801089 | orchestrator | TASK [Restart ceph mds] ********************************************************
2026-03-18 05:07:13.801095 | orchestrator | Wednesday 18 March 2026 05:07:13 +0000 (0:00:00.703) 0:23:45.268 *******
2026-03-18 05:07:13.801109 | orchestrator | skipping: [testbed-node-5]
2026-03-18 05:07:28.526851 | orchestrator |
2026-03-18 05:07:28.526967 | orchestrator | TASK [Restart active mds] ******************************************************
2026-03-18 05:07:28.526984 | orchestrator | Wednesday 18 March 2026 05:07:13 +0000 (0:00:00.138) 0:23:45.406 *******
2026-03-18 05:07:28.526997 | orchestrator | ok: [testbed-node-5]
2026-03-18 05:07:28.527009 | orchestrator |
2026-03-18 05:07:28.527021 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] **************************************
2026-03-18 05:07:28.527032 | orchestrator |
2026-03-18 05:07:28.527043 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-18 05:07:28.527054 | orchestrator | Wednesday 18 March 2026 05:07:19 +0000 (0:00:05.994) 0:23:51.400 *******
2026-03-18 05:07:28.527065 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4, testbed-node-3
2026-03-18 05:07:28.527077 | orchestrator |
2026-03-18 05:07:28.527087 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-18 05:07:28.527098 | orchestrator | Wednesday 18 March 2026 05:07:20 +0000 (0:00:00.425) 0:23:51.826 *******
2026-03-18 05:07:28.527109 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:28.527120 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:28.527130 | orchestrator |
2026-03-18 05:07:28.527141 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-18 05:07:28.527152 | orchestrator | Wednesday 18 March 2026 05:07:21 +0000 (0:00:00.954) 0:23:52.781 *******
2026-03-18 05:07:28.527163 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:28.527174 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:28.527185 | orchestrator |
2026-03-18 05:07:28.527195 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-18 05:07:28.527206 | orchestrator | Wednesday 18 March 2026 05:07:21 +0000 (0:00:00.242) 0:23:53.023 *******
2026-03-18 05:07:28.527241 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:28.527311 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:28.527323 | orchestrator |
2026-03-18 05:07:28.527333 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-18 05:07:28.527344 | orchestrator | Wednesday 18 March 2026 05:07:21 +0000 (0:00:00.560) 0:23:53.584 *******
2026-03-18 05:07:28.527355 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:28.527366 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:28.527376 | orchestrator |
2026-03-18 05:07:28.527387 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-18 05:07:28.527398 | orchestrator | Wednesday 18 March 2026 05:07:22 +0000 (0:00:00.325) 0:23:53.909 *******
2026-03-18 05:07:28.527409 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:28.527419 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:28.527430 | orchestrator |
2026-03-18 05:07:28.527441 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-18 05:07:28.527452 | orchestrator | Wednesday 18 March 2026 05:07:22 +0000 (0:00:00.257) 0:23:54.167 *******
2026-03-18 05:07:28.527463 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:28.527474 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:28.527485 | orchestrator |
2026-03-18 05:07:28.527495 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-18 05:07:28.527506 | orchestrator | Wednesday 18 March 2026 05:07:22 +0000 (0:00:00.272) 0:23:54.440 *******
2026-03-18 05:07:28.527533 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:28.527546 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:28.527557 | orchestrator |
2026-03-18 05:07:28.527568 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-18 05:07:28.527579 | orchestrator | Wednesday 18 March 2026 05:07:23 +0000 (0:00:00.600) 0:23:55.040 *******
2026-03-18 05:07:28.527590 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:28.527601 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:28.527611 | orchestrator |
2026-03-18 05:07:28.527622 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-18 05:07:28.527633 | orchestrator | Wednesday 18 March 2026 05:07:23 +0000 (0:00:00.230) 0:23:55.271 *******
2026-03-18 05:07:28.527644 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 05:07:28.527655 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 05:07:28.527665 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 05:07:28.527676 | orchestrator |
2026-03-18 05:07:28.527687 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-18 05:07:28.527697 | orchestrator | Wednesday 18 March 2026 05:07:24 +0000 (0:00:00.734) 0:23:56.005 *******
2026-03-18 05:07:28.527708 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:28.527719 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:28.527730 | orchestrator |
2026-03-18 05:07:28.527741 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-18 05:07:28.527751 | orchestrator | Wednesday 18 March 2026 05:07:24 +0000 (0:00:00.388) 0:23:56.394 *******
2026-03-18 05:07:28.527762 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 05:07:28.527773 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 05:07:28.527783 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 05:07:28.527794 | orchestrator |
2026-03-18 05:07:28.527805 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-18 05:07:28.527816 | orchestrator | Wednesday 18 March 2026 05:07:26 +0000 (0:00:01.949) 0:23:58.343 *******
2026-03-18 05:07:28.527827 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-18 05:07:28.527838 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-18 05:07:28.527849 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-18 05:07:28.527868 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:28.527879 | orchestrator |
2026-03-18 05:07:28.527889 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-18 05:07:28.527900 | orchestrator | Wednesday 18 March 2026 05:07:27 +0000 (0:00:00.436) 0:23:58.780 *******
2026-03-18 05:07:28.527928 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-18 05:07:28.527943 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-18 05:07:28.527954 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-18 05:07:28.527965 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:28.527976 | orchestrator |
2026-03-18 05:07:28.527987 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-18 05:07:28.527998 | orchestrator | Wednesday 18 March 2026 05:07:28 +0000 (0:00:00.959) 0:23:59.739 *******
2026-03-18 05:07:28.528011 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-18 05:07:28.528024 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-18 05:07:28.528041 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-18 05:07:28.528052 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:28.528063 | orchestrator |
2026-03-18 05:07:28.528074 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-18 05:07:28.528085 | orchestrator | Wednesday 18 March 2026 05:07:28 +0000 (0:00:00.168) 0:23:59.908 *******
2026-03-18 05:07:28.528098 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'f231ed715636', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 05:07:25.348769', 'end': '2026-03-18 05:07:25.414861', 'delta': '0:00:00.066092', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f231ed715636'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-18 05:07:28.528113 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'c6b616adb9bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 05:07:25.957912', 'end': '2026-03-18 05:07:26.016379', 'delta': '0:00:00.058467', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c6b616adb9bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-18 05:07:28.528140 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '38d5679b5612', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 05:07:26.518420', 'end': '2026-03-18 05:07:26.569533', 'delta': '0:00:00.051113', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['38d5679b5612'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-18 05:07:34.350983 | orchestrator |
2026-03-18 05:07:34.351108 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-18 05:07:34.351139 | orchestrator | Wednesday 18 March 2026 05:07:28 +0000 (0:00:00.219) 0:24:00.127 *******
2026-03-18 05:07:34.351157 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:34.351170 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:34.351181 | orchestrator |
2026-03-18 05:07:34.351192 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-18 05:07:34.351204 | orchestrator | Wednesday 18 March 2026 05:07:28 +0000 (0:00:00.368) 0:24:00.496 *******
2026-03-18 05:07:34.351215 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:34.351227 | orchestrator |
2026-03-18 05:07:34.351238 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-18 05:07:34.351286 | orchestrator | Wednesday 18 March 2026 05:07:29 +0000 (0:00:00.962) 0:24:01.458 *******
2026-03-18 05:07:34.351306 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:34.351325 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:34.351343 | orchestrator |
2026-03-18 05:07:34.351361 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-18 05:07:34.351380 | orchestrator | Wednesday 18 March 2026 05:07:30 +0000 (0:00:00.250) 0:24:01.709 *******
2026-03-18 05:07:34.351398 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-18 05:07:34.351417 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-18 05:07:34.351428 | orchestrator |
2026-03-18 05:07:34.351439 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-18 05:07:34.351450 | orchestrator | Wednesday 18 March 2026 05:07:31 +0000 (0:00:01.188) 0:24:02.898 *******
2026-03-18 05:07:34.351461 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:34.351472 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:34.351483 | orchestrator |
2026-03-18 05:07:34.351494 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-18 05:07:34.351505 | orchestrator | Wednesday 18 March 2026 05:07:31 +0000 (0:00:00.259) 0:24:03.157 *******
2026-03-18 05:07:34.351519 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:34.351533 | orchestrator |
2026-03-18 05:07:34.351545 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-18 05:07:34.351558 | orchestrator | Wednesday 18 March 2026 05:07:31 +0000 (0:00:00.150) 0:24:03.308 *******
2026-03-18 05:07:34.351570 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:34.351582 | orchestrator |
2026-03-18 05:07:34.351610 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-18 05:07:34.351624 | orchestrator | Wednesday 18 March 2026 05:07:31 +0000 (0:00:00.282) 0:24:03.590 *******
2026-03-18 05:07:34.351660 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:34.351673 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:34.351686 | orchestrator |
2026-03-18 05:07:34.351699 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-18 05:07:34.351712 | orchestrator | Wednesday 18 March 2026 05:07:32 +0000 (0:00:00.290) 0:24:03.881 *******
2026-03-18 05:07:34.351723 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:34.351733 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:34.351744 | orchestrator |
2026-03-18 05:07:34.351755 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-18 05:07:34.351766 | orchestrator | Wednesday 18 March 2026 05:07:32 +0000 (0:00:00.218) 0:24:04.099 *******
2026-03-18 05:07:34.351776 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:34.351787 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:34.351798 | orchestrator |
2026-03-18 05:07:34.351808 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-18 05:07:34.351819 | orchestrator | Wednesday 18 March 2026 05:07:33 +0000 (0:00:00.605) 0:24:04.705 *******
2026-03-18 05:07:34.351830 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:34.351841 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:34.351851 | orchestrator |
2026-03-18 05:07:34.351862 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-18 05:07:34.351873 | orchestrator | Wednesday 18 March 2026 05:07:33 +0000 (0:00:00.241) 0:24:04.947 *******
2026-03-18 05:07:34.351884 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:34.351895 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:34.351905 | orchestrator |
2026-03-18 05:07:34.351916 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-18 05:07:34.351926 | orchestrator | Wednesday 18 March 2026 05:07:33 +0000 (0:00:00.282) 0:24:05.229 *******
2026-03-18 05:07:34.351937 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:34.351948 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:34.351959 | orchestrator |
2026-03-18 05:07:34.351969 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-18 05:07:34.351981 | orchestrator | Wednesday 18 March 2026 05:07:33 +0000 (0:00:00.236) 0:24:05.466 *******
2026-03-18 05:07:34.351991 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:34.352002 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:34.352013 | orchestrator |
2026-03-18 05:07:34.352024 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-18 05:07:34.352034 | orchestrator | Wednesday 18 March 2026 05:07:34 +0000 (0:00:00.279) 0:24:05.746 *******
2026-03-18 05:07:34.352047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:07:34.352084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d', 'dm-uuid-LVM-1nghto8FjlgOMGE0qJuNE35bcFGeakm7FeqYn9N8yM2I7mHfmTh3UyYEE55mFAWL'], 'uuids': ['983d6df2-25ad-44ac-a3c4-ba9acd83e203'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4bc8da1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL']}})
2026-03-18 05:07:34.352099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a', 'scsi-SQEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9cbe8edb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-18 05:07:34.352125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jnV2yd-YS7R-Vqep-tcrP-VJxp-okiM-Yb1ELG', 'scsi-0QEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc', 'scsi-SQEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '80734d97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af']}})
2026-03-18 05:07:34.352138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:07:34.352151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:07:34.352163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-18 05:07:34.352176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:07:34.352196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M', 'dm-uuid-CRYPT-LUKS2-61b3b30ad50c493e85c9b4a1f26e6c13-31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-18 05:07:34.456596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:07:34.456757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af', 'dm-uuid-LVM-r2QSpox5L5YvZxbLW2ofZmnL2yRyHAcb31gjpKAQuj1V0dzEH4DggGep9onP7U5M'], 'uuids': ['61b3b30a-d50c-493e-85c9-b4a1f26e6c13'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '80734d97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M']}})
2026-03-18 05:07:34.456794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:07:34.456812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-gdKyfy-wnzk-0StP-QaSt-irpk-iROA-l0CD4I', 'scsi-0QEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a', 'scsi-SQEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4bc8da1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d']}})
2026-03-18 05:07:34.456833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a', 'dm-uuid-LVM-OBDgCO1TfJO26KZndmcG4XUfdlxxEe11eqb03b1R3TiAd5BAik4vvOnTIot4pXZ1'], 'uuids': ['55d52066-97cb-48c1-a9a5-651ff762c061'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ebabc839', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1']}})
2026-03-18 05:07:34.456852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:07:34.456893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa', 'scsi-SQEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '26f175df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-18 05:07:34.456940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '248efa21', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-18 05:07:34.456964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hyLInL-qBmT-hkMu-ewvD-iGD6-c0uQ-hDScLy', 'scsi-0QEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768', 'scsi-SQEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3c07f10e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb']}})
2026-03-18 05:07:34.456984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:07:34.457003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:07:34.457033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:07:34.586988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL', 'dm-uuid-CRYPT-LUKS2-983d6df225ad44aca3c4ba9acd83e203-FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-18 05:07:34.587095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:07:34.587139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB',
'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 05:07:34.587154 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:07:34.587167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:07:34.587179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3', 'dm-uuid-CRYPT-LUKS2-7a3d4fd16bbc483aab118d6b9a67b0a4-TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-18 05:07:34.587191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:07:34.587204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb', 
'dm-uuid-LVM-W5EL6s0cOZukCJgJFLnUeUfZF3v581ieTXRD31C4XH2D2TZlGP7o3YPUberRNbx3'], 'uuids': ['7a3d4fd1-6bbc-483a-ab11-8d6b9a67b0a4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3c07f10e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3']}})  2026-03-18 05:07:34.587307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-w7AXxM-UwrZ-P6aH-00LI-mMT0-kFYy-HZNbAJ', 'scsi-0QEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e', 'scsi-SQEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ebabc839', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a']}})  2026-03-18 05:07:34.587323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:07:34.587346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1c5784ed', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-18 05:07:34.587361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:07:34.587380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:07:34.587400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1', 'dm-uuid-CRYPT-LUKS2-55d5206697cb48c1a9a5651ff762c061-eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-18 05:07:34.795603 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:34.795703 | orchestrator |
2026-03-18 05:07:34.795719 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-18 05:07:34.795732 | orchestrator | Wednesday 18 March 2026 05:07:34 +0000 (0:00:00.451) 0:24:06.197 *******
2026-03-18 05:07:34.795747 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:07:34.795780 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d', 'dm-uuid-LVM-1nghto8FjlgOMGE0qJuNE35bcFGeakm7FeqYn9N8yM2I7mHfmTh3UyYEE55mFAWL'], 'uuids': ['983d6df2-25ad-44ac-a3c4-ba9acd83e203'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4bc8da1', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.795794 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a', 'scsi-SQEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9cbe8edb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.795808 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jnV2yd-YS7R-Vqep-tcrP-VJxp-okiM-Yb1ELG', 'scsi-0QEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc', 'scsi-SQEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '80734d97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.795859 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.795874 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.795891 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.795903 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.795915 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.795933 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M', 'dm-uuid-CRYPT-LUKS2-61b3b30ad50c493e85c9b4a1f26e6c13-31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M'], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.795952 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a', 'dm-uuid-LVM-OBDgCO1TfJO26KZndmcG4XUfdlxxEe11eqb03b1R3TiAd5BAik4vvOnTIot4pXZ1'], 'uuids': ['55d52066-97cb-48c1-a9a5-651ff762c061'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ebabc839', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.851181 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.851310 | orchestrator | skipping: [testbed-node-3] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa', 'scsi-SQEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '26f175df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.851327 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af', 'dm-uuid-LVM-r2QSpox5L5YvZxbLW2ofZmnL2yRyHAcb31gjpKAQuj1V0dzEH4DggGep9onP7U5M'], 'uuids': ['61b3b30a-d50c-493e-85c9-b4a1f26e6c13'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '80734d97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.851360 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': 
{'ids': ['lvm-pv-uuid-hyLInL-qBmT-hkMu-ewvD-iGD6-c0uQ-hDScLy', 'scsi-0QEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768', 'scsi-SQEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3c07f10e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.851391 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-gdKyfy-wnzk-0StP-QaSt-irpk-iROA-l0CD4I', 'scsi-0QEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a', 'scsi-SQEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4bc8da1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.851410 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.851423 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.851434 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.851446 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.851480 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '248efa21', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.933551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.933672 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.933733 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3', 'dm-uuid-CRYPT-LUKS2-7a3d4fd16bbc483aab118d6b9a67b0a4-TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.933756 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.933777 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:07:34.933812 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL', 'dm-uuid-CRYPT-LUKS2-983d6df225ad44aca3c4ba9acd83e203-FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:07:34.933826 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:34.933861 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb', 'dm-uuid-LVM-W5EL6s0cOZukCJgJFLnUeUfZF3v581ieTXRD31C4XH2D2TZlGP7o3YPUberRNbx3'], 'uuids': ['7a3d4fd1-6bbc-483a-ab11-8d6b9a67b0a4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3c07f10e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3']}}, 'ansible_loop_var': 'item'})
2026-03-18 05:07:34.933890 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-w7AXxM-UwrZ-P6aH-00LI-mMT0-kFYy-HZNbAJ', 'scsi-0QEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e', 'scsi-SQEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ebabc839', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a']}}, 'ansible_loop_var': 'item'})
2026-03-18 05:07:34.933934 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:07:34.933965 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1c5784ed', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:07:44.732856 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:07:44.733026 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:07:44.733046 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1', 'dm-uuid-CRYPT-LUKS2-55d5206697cb48c1a9a5651ff762c061-eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:07:44.733059 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:44.733092 | orchestrator |
2026-03-18 05:07:44.733115 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-18 05:07:44.733129 | orchestrator | Wednesday 18 March 2026 05:07:35 +0000 (0:00:00.512) 0:24:06.710 *******
2026-03-18 05:07:44.733140 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:44.733152 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:44.733163 | orchestrator |
2026-03-18 05:07:44.733174 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-18 05:07:44.733185 | orchestrator | Wednesday 18 March 2026 05:07:36 +0000 (0:00:00.964) 0:24:07.674 *******
2026-03-18 05:07:44.733196 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:44.733206 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:44.733217 | orchestrator |
2026-03-18 05:07:44.733228 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-18 05:07:44.733239 | orchestrator | Wednesday 18 March 2026 05:07:36 +0000 (0:00:00.238) 0:24:07.912 *******
2026-03-18 05:07:44.733250 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:44.733309 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:44.733321 | orchestrator |
2026-03-18 05:07:44.733332 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-18 05:07:44.733343 | orchestrator | Wednesday 18 March 2026 05:07:36 +0000 (0:00:00.593) 0:24:08.506 *******
2026-03-18 05:07:44.733354 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:44.733365 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:44.733376 | orchestrator |
2026-03-18 05:07:44.733387 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-18 05:07:44.733412 | orchestrator | Wednesday 18 March 2026 05:07:37 +0000 (0:00:00.251) 0:24:08.758 *******
2026-03-18 05:07:44.733423 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:44.733434 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:44.733445 | orchestrator |
2026-03-18 05:07:44.733456 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-18 05:07:44.733476 | orchestrator | Wednesday 18 March 2026 05:07:37 +0000 (0:00:00.364) 0:24:09.122 *******
2026-03-18 05:07:44.733487 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:44.733498 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:44.733509 | orchestrator |
2026-03-18 05:07:44.733520 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-18 05:07:44.733530 | orchestrator | Wednesday 18 March 2026 05:07:37 +0000 (0:00:00.267) 0:24:09.390 *******
2026-03-18 05:07:44.733541 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-18 05:07:44.733552 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-18 05:07:44.733563 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-18 05:07:44.733573 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-18 05:07:44.733584 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-18 05:07:44.733594 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-18 05:07:44.733605 | orchestrator |
2026-03-18 05:07:44.733616 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-18 05:07:44.733627 | orchestrator | Wednesday 18 March 2026 05:07:39 +0000 (0:00:01.516) 0:24:10.906 *******
2026-03-18 05:07:44.733655 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-18 05:07:44.733667 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-18 05:07:44.733677 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-18 05:07:44.733688 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:44.733699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-18 05:07:44.733709 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-18 05:07:44.733720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-18 05:07:44.733731 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:44.733742 | orchestrator |
2026-03-18 05:07:44.733753 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-18 05:07:44.733764 | orchestrator | Wednesday 18 March 2026 05:07:39 +0000 (0:00:00.287) 0:24:11.194 *******
2026-03-18 05:07:44.733775 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4, testbed-node-3
2026-03-18 05:07:44.733787 | orchestrator |
2026-03-18 05:07:44.733799 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-18 05:07:44.733812 | orchestrator | Wednesday 18 March 2026 05:07:39 +0000 (0:00:00.410) 0:24:11.604 *******
2026-03-18 05:07:44.733823 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:44.733834 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:44.733845 | orchestrator |
2026-03-18 05:07:44.733855 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-18 05:07:44.733866 | orchestrator | Wednesday 18 March 2026 05:07:40 +0000 (0:00:00.294) 0:24:11.899 *******
2026-03-18 05:07:44.733877 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:44.733887 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:44.733898 | orchestrator |
2026-03-18 05:07:44.733909 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-18 05:07:44.733920 | orchestrator | Wednesday 18 March 2026 05:07:40 +0000 (0:00:00.237) 0:24:12.136 *******
2026-03-18 05:07:44.733930 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:44.733941 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:44.733952 | orchestrator |
2026-03-18 05:07:44.733963 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-18 05:07:44.733973 | orchestrator | Wednesday 18 March 2026 05:07:40 +0000 (0:00:00.238) 0:24:12.375 *******
2026-03-18 05:07:44.733984 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:44.733995 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:44.734006 | orchestrator |
2026-03-18 05:07:44.734072 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-18 05:07:44.734085 | orchestrator | Wednesday 18 March 2026 05:07:41 +0000 (0:00:00.709) 0:24:13.085 *******
2026-03-18 05:07:44.734103 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-18 05:07:44.734114 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-18 05:07:44.734124 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-18 05:07:44.734135 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:44.734146 | orchestrator |
2026-03-18 05:07:44.734157 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-18 05:07:44.734167 | orchestrator | Wednesday 18 March 2026 05:07:41 +0000 (0:00:00.426) 0:24:13.511 *******
2026-03-18 05:07:44.734178 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-18 05:07:44.734189 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-18 05:07:44.734200 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-18 05:07:44.734211 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:44.734221 | orchestrator |
2026-03-18 05:07:44.734232 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-18 05:07:44.734243 | orchestrator | Wednesday 18 March 2026 05:07:42 +0000 (0:00:00.423) 0:24:13.935 *******
2026-03-18 05:07:44.734278 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-18 05:07:44.734291 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-18 05:07:44.734302 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-18 05:07:44.734313 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:44.734324 | orchestrator |
2026-03-18 05:07:44.734335 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-18 05:07:44.734346 | orchestrator | Wednesday 18 March 2026 05:07:42 +0000 (0:00:00.442) 0:24:14.377 *******
2026-03-18 05:07:44.734362 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:44.734374 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:44.734384 | orchestrator |
2026-03-18 05:07:44.734395 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-18 05:07:44.734406 | orchestrator | Wednesday 18 March 2026 05:07:43 +0000 (0:00:00.275) 0:24:14.653 *******
2026-03-18 05:07:44.734417 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-18 05:07:44.734428 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-18 05:07:44.734439 | orchestrator |
2026-03-18 05:07:44.734450 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-18 05:07:44.734461 | orchestrator | Wednesday 18 March 2026 05:07:43 +0000 (0:00:00.472) 0:24:15.125 *******
2026-03-18 05:07:44.734472 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 05:07:44.734482 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 05:07:44.734493 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 05:07:44.734504 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-18 05:07:44.734515 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-18 05:07:44.734526 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-18 05:07:44.734543 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 05:07:58.672030 | orchestrator |
2026-03-18 05:07:58.672185 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-18 05:07:58.672211 | orchestrator | Wednesday 18 March 2026 05:07:44 +0000 (0:00:01.208) 0:24:16.334 *******
2026-03-18 05:07:58.672230 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 05:07:58.672251 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 05:07:58.672359 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 05:07:58.672382 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-18 05:07:58.672443 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-18 05:07:58.672467 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-18 05:07:58.672485 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 05:07:58.672502 | orchestrator |
2026-03-18 05:07:58.672514 | orchestrator | TASK [Prevent restarts from the packaging] *************************************
2026-03-18 05:07:58.672530 | orchestrator | Wednesday 18 March 2026 05:07:46 +0000 (0:00:01.817) 0:24:18.151 *******
2026-03-18 05:07:58.672548 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.672571 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.672585 | orchestrator |
2026-03-18 05:07:58.672598 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-18 05:07:58.672612 | orchestrator | Wednesday 18 March 2026 05:07:47 +0000 (0:00:00.583) 0:24:18.735 *******
2026-03-18 05:07:58.672624 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-3
2026-03-18 05:07:58.672638 | orchestrator |
2026-03-18 05:07:58.672651 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-18 05:07:58.672664 | orchestrator | Wednesday 18 March 2026 05:07:47 +0000 (0:00:00.384) 0:24:19.120 *******
2026-03-18 05:07:58.672677 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-3
2026-03-18 05:07:58.672689 | orchestrator |
2026-03-18 05:07:58.672701 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-18 05:07:58.672714 | orchestrator | Wednesday 18 March 2026 05:07:47 +0000 (0:00:00.395) 0:24:19.516 *******
2026-03-18 05:07:58.672727 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.672741 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.672753 | orchestrator |
2026-03-18 05:07:58.672764 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-18 05:07:58.672775 | orchestrator | Wednesday 18 March 2026 05:07:48 +0000 (0:00:00.233) 0:24:19.750 *******
2026-03-18 05:07:58.672786 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:58.672797 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:58.672808 | orchestrator |
2026-03-18 05:07:58.672818 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-18 05:07:58.672829 | orchestrator | Wednesday 18 March 2026 05:07:48 +0000 (0:00:00.633) 0:24:20.383 *******
2026-03-18 05:07:58.672840 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:58.672851 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:58.672862 | orchestrator |
2026-03-18 05:07:58.672872 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-18 05:07:58.672883 | orchestrator | Wednesday 18 March 2026 05:07:49 +0000 (0:00:00.986) 0:24:21.369 *******
2026-03-18 05:07:58.672894 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:58.672905 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:58.672916 | orchestrator |
2026-03-18 05:07:58.672926 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-18 05:07:58.672937 | orchestrator | Wednesday 18 March 2026 05:07:50 +0000 (0:00:00.669) 0:24:22.039 *******
2026-03-18 05:07:58.672948 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.672959 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.672970 | orchestrator |
2026-03-18 05:07:58.672981 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-18 05:07:58.672992 | orchestrator | Wednesday 18 March 2026 05:07:50 +0000 (0:00:00.245) 0:24:22.285 *******
2026-03-18 05:07:58.673002 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.673013 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.673024 | orchestrator |
2026-03-18 05:07:58.673051 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-18 05:07:58.673063 | orchestrator | Wednesday 18 March 2026 05:07:50 +0000 (0:00:00.261) 0:24:22.546 *******
2026-03-18 05:07:58.673083 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.673095 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.673105 | orchestrator |
2026-03-18 05:07:58.673116 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-18 05:07:58.673127 | orchestrator | Wednesday 18 March 2026 05:07:51 +0000 (0:00:00.234) 0:24:22.780 *******
2026-03-18 05:07:58.673138 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:58.673149 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:58.673160 | orchestrator |
2026-03-18 05:07:58.673171 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-18 05:07:58.673181 | orchestrator | Wednesday 18 March 2026 05:07:51 +0000 (0:00:00.640) 0:24:23.420 *******
2026-03-18 05:07:58.673192 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:58.673203 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:58.673214 | orchestrator |
2026-03-18 05:07:58.673225 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-18 05:07:58.673236 | orchestrator | Wednesday 18 March 2026 05:07:52 +0000 (0:00:00.955) 0:24:24.376 *******
2026-03-18 05:07:58.673247 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.673258 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.673295 | orchestrator |
2026-03-18 05:07:58.673306 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-18 05:07:58.673317 | orchestrator | Wednesday 18 March 2026 05:07:53 +0000 (0:00:00.244) 0:24:24.621 *******
2026-03-18 05:07:58.673328 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.673363 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.673376 | orchestrator |
2026-03-18 05:07:58.673387 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-18 05:07:58.673398 | orchestrator | Wednesday 18 March 2026 05:07:53 +0000 (0:00:00.262) 0:24:24.883 *******
2026-03-18 05:07:58.673409 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:58.673420 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:58.673430 | orchestrator |
2026-03-18 05:07:58.673441 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-18 05:07:58.673454 | orchestrator | Wednesday 18 March 2026 05:07:53 +0000 (0:00:00.272) 0:24:25.156 *******
2026-03-18 05:07:58.673474 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:58.673495 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:58.673514 | orchestrator |
2026-03-18 05:07:58.673532 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-18 05:07:58.673543 | orchestrator | Wednesday 18 March 2026 05:07:53 +0000 (0:00:00.251) 0:24:25.407 *******
2026-03-18 05:07:58.673554 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:58.673565 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:58.673576 | orchestrator |
2026-03-18 05:07:58.673586 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-18 05:07:58.673597 | orchestrator | Wednesday 18 March 2026 05:07:54 +0000 (0:00:00.260) 0:24:25.668 *******
2026-03-18 05:07:58.673608 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.673619 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.673630 | orchestrator |
2026-03-18 05:07:58.673641 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-18 05:07:58.673652 | orchestrator | Wednesday 18 March 2026 05:07:54 +0000 (0:00:00.248) 0:24:25.916 *******
2026-03-18 05:07:58.673663 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.673674 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.673690 | orchestrator |
2026-03-18 05:07:58.673709 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-18 05:07:58.673721 | orchestrator | Wednesday 18 March 2026 05:07:54 +0000 (0:00:00.558) 0:24:26.474 *******
2026-03-18 05:07:58.673732 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.673743 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.673754 | orchestrator |
2026-03-18 05:07:58.673764 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-18 05:07:58.673778 | orchestrator | Wednesday 18 March 2026 05:07:55 +0000 (0:00:00.261) 0:24:26.736 *******
2026-03-18 05:07:58.673809 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:58.673827 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:58.673846 | orchestrator |
2026-03-18 05:07:58.673864 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-18 05:07:58.673882 | orchestrator | Wednesday 18 March 2026 05:07:55 +0000 (0:00:00.302) 0:24:27.039 *******
2026-03-18 05:07:58.673898 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:07:58.673917 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:07:58.673936 | orchestrator |
2026-03-18 05:07:58.673955 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-18 05:07:58.673973 | orchestrator | Wednesday 18 March 2026 05:07:55 +0000 (0:00:00.394) 0:24:27.433 *******
2026-03-18 05:07:58.673991 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.674011 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.674101 | orchestrator |
2026-03-18 05:07:58.674112 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-18 05:07:58.674122 | orchestrator | Wednesday 18 March 2026 05:07:56 +0000 (0:00:00.234) 0:24:27.668 *******
2026-03-18 05:07:58.674133 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.674144 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.674155 | orchestrator |
2026-03-18 05:07:58.674203 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-18 05:07:58.674215 | orchestrator | Wednesday 18 March 2026 05:07:56 +0000 (0:00:00.550) 0:24:28.218 *******
2026-03-18 05:07:58.674226 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.674237 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.674247 | orchestrator |
2026-03-18 05:07:58.674258 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-18 05:07:58.674297 | orchestrator | Wednesday 18 March 2026 05:07:56 +0000 (0:00:00.261) 0:24:28.479 *******
2026-03-18 05:07:58.674315 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.674335 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.674353 | orchestrator |
2026-03-18 05:07:58.674371 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-18 05:07:58.674389 | orchestrator | Wednesday 18 March 2026 05:07:57 +0000 (0:00:00.243) 0:24:28.723 *******
2026-03-18 05:07:58.674420 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.674440 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.674457 | orchestrator |
2026-03-18 05:07:58.674469 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-18 05:07:58.674480 | orchestrator | Wednesday 18 March 2026 05:07:57 +0000 (0:00:00.297) 0:24:29.020 *******
2026-03-18 05:07:58.674490 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.674501 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.674512 | orchestrator |
2026-03-18 05:07:58.674522 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-18 05:07:58.674533 | orchestrator | Wednesday 18 March 2026 05:07:57 +0000 (0:00:00.230) 0:24:29.250 *******
2026-03-18 05:07:58.674544 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.674555 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.674565 | orchestrator |
2026-03-18 05:07:58.674576 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-18 05:07:58.674587 | orchestrator | Wednesday 18 March 2026 05:07:57 +0000 (0:00:00.253) 0:24:29.504 *******
2026-03-18 05:07:58.674598 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.674609 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.674619 | orchestrator |
2026-03-18 05:07:58.674630 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-18 05:07:58.674641 | orchestrator | Wednesday 18 March 2026 05:07:58 +0000 (0:00:00.232) 0:24:29.736 *******
2026-03-18 05:07:58.674651 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:07:58.674662 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:07:58.674673 | orchestrator |
2026-03-18 05:07:58.674698 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-18 05:08:13.568864 | orchestrator | Wednesday 18 March 2026 05:07:58 +0000 (0:00:00.536) 0:24:30.273 *******
2026-03-18 05:08:13.568979 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:08:13.568996 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:08:13.569008 | orchestrator |
2026-03-18 05:08:13.569020 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-18 05:08:13.569031 | orchestrator | Wednesday 18 March 2026 05:07:58 +0000 (0:00:00.242) 0:24:30.516 *******
2026-03-18 05:08:13.569042 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:08:13.569053 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:08:13.569064 | orchestrator |
2026-03-18 05:08:13.569075 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-18 05:08:13.569086 | orchestrator | Wednesday 18 March 2026 05:07:59 +0000 (0:00:00.240) 0:24:30.757 *******
2026-03-18 05:08:13.569097 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:08:13.569108 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:08:13.569119 | orchestrator |
2026-03-18 05:08:13.569130 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-18 05:08:13.569141 | orchestrator | Wednesday 18 March 2026 05:07:59 +0000 (0:00:00.402) 0:24:31.159 *******
2026-03-18 05:08:13.569152 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:08:13.569164 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:08:13.569175 | orchestrator |
2026-03-18 05:08:13.569185 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-18 05:08:13.569196 | orchestrator | Wednesday 18 March 2026 05:08:00 +0000 (0:00:01.019) 0:24:32.179 *******
2026-03-18 05:08:13.569207 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:08:13.569219 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:08:13.569230 | orchestrator |
2026-03-18 05:08:13.569241 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-18 05:08:13.569251 | orchestrator | Wednesday 18 March 2026 05:08:01 +0000 (0:00:01.330) 0:24:33.509 *******
2026-03-18 05:08:13.569263 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4, testbed-node-3
2026-03-18 05:08:13.569344 | orchestrator |
2026-03-18 05:08:13.569360 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-18 05:08:13.569371 | orchestrator | Wednesday 18 March 2026 05:08:02 +0000 (0:00:00.754) 0:24:34.264 *******
2026-03-18 05:08:13.569382 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:08:13.569394 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:08:13.569408 | orchestrator |
2026-03-18 05:08:13.569421 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-18 05:08:13.569434 | orchestrator | Wednesday 18 March 2026 05:08:02 +0000 (0:00:00.271) 0:24:34.535 *******
2026-03-18 05:08:13.569447 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:08:13.569459 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:08:13.569472 | orchestrator |
2026-03-18 05:08:13.569485 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-18 05:08:13.569499 | orchestrator | Wednesday 18 March 2026 05:08:03 +0000 (0:00:00.244) 0:24:34.780 *******
2026-03-18 05:08:13.569511 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-18 05:08:13.569525 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-18 05:08:13.569538 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-18 05:08:13.569551 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-18 05:08:13.569563 | orchestrator |
2026-03-18 05:08:13.569576 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-18 05:08:13.569589 | orchestrator | Wednesday 18 March 2026 05:08:04 +0000 (0:00:00.908) 0:24:35.688 *******
2026-03-18 05:08:13.569602 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:08:13.569615 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:08:13.569628 | orchestrator |
2026-03-18 05:08:13.569670 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-18 05:08:13.569683 | orchestrator | Wednesday 18 March 2026 05:08:04 +0000 (0:00:00.572) 0:24:36.260 *******
2026-03-18 05:08:13.569696 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:08:13.569708 |
orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:13.569721 | orchestrator | 2026-03-18 05:08:13.569734 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-18 05:08:13.569763 | orchestrator | Wednesday 18 March 2026 05:08:05 +0000 (0:00:00.630) 0:24:36.891 ******* 2026-03-18 05:08:13.569774 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:13.569785 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:13.569796 | orchestrator | 2026-03-18 05:08:13.569807 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-18 05:08:13.569818 | orchestrator | Wednesday 18 March 2026 05:08:05 +0000 (0:00:00.249) 0:24:37.141 ******* 2026-03-18 05:08:13.569829 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:13.569840 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:13.569851 | orchestrator | 2026-03-18 05:08:13.569861 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-18 05:08:13.569873 | orchestrator | Wednesday 18 March 2026 05:08:05 +0000 (0:00:00.257) 0:24:37.398 ******* 2026-03-18 05:08:13.569884 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4, testbed-node-3 2026-03-18 05:08:13.569895 | orchestrator | 2026-03-18 05:08:13.569906 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-18 05:08:13.569917 | orchestrator | Wednesday 18 March 2026 05:08:06 +0000 (0:00:00.389) 0:24:37.788 ******* 2026-03-18 05:08:13.569927 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:08:13.569939 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:08:13.569949 | orchestrator | 2026-03-18 05:08:13.569961 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-18 05:08:13.569972 | orchestrator | Wednesday 18 March 2026 
05:08:07 +0000 (0:00:00.868) 0:24:38.656 ******* 2026-03-18 05:08:13.569983 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 05:08:13.570012 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 05:08:13.570088 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-18 05:08:13.570100 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:13.570111 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 05:08:13.570121 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 05:08:13.570132 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-18 05:08:13.570143 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:13.570154 | orchestrator | 2026-03-18 05:08:13.570165 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-18 05:08:13.570176 | orchestrator | Wednesday 18 March 2026 05:08:07 +0000 (0:00:00.291) 0:24:38.948 ******* 2026-03-18 05:08:13.570219 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:13.570232 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:13.570243 | orchestrator | 2026-03-18 05:08:13.570253 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-18 05:08:13.570264 | orchestrator | Wednesday 18 March 2026 05:08:07 +0000 (0:00:00.584) 0:24:39.532 ******* 2026-03-18 05:08:13.570295 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:13.570306 | orchestrator | 2026-03-18 05:08:13.570317 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-18 05:08:13.570328 | orchestrator | Wednesday 18 March 2026 05:08:08 +0000 (0:00:00.178) 0:24:39.710 ******* 2026-03-18 05:08:13.570338 | orchestrator | 
skipping: [testbed-node-4] 2026-03-18 05:08:13.570349 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:13.570360 | orchestrator | 2026-03-18 05:08:13.570371 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-18 05:08:13.570392 | orchestrator | Wednesday 18 March 2026 05:08:08 +0000 (0:00:00.273) 0:24:39.984 ******* 2026-03-18 05:08:13.570403 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:13.570414 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:13.570425 | orchestrator | 2026-03-18 05:08:13.570435 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-18 05:08:13.570446 | orchestrator | Wednesday 18 March 2026 05:08:08 +0000 (0:00:00.264) 0:24:40.248 ******* 2026-03-18 05:08:13.570457 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:13.570468 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:13.570478 | orchestrator | 2026-03-18 05:08:13.570489 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-18 05:08:13.570500 | orchestrator | Wednesday 18 March 2026 05:08:08 +0000 (0:00:00.259) 0:24:40.507 ******* 2026-03-18 05:08:13.570510 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:08:13.570521 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:08:13.570532 | orchestrator | 2026-03-18 05:08:13.570542 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-18 05:08:13.570553 | orchestrator | Wednesday 18 March 2026 05:08:10 +0000 (0:00:01.536) 0:24:42.043 ******* 2026-03-18 05:08:13.570564 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:08:13.570575 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:08:13.570586 | orchestrator | 2026-03-18 05:08:13.570597 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-18 05:08:13.570607 | orchestrator 
| Wednesday 18 March 2026 05:08:10 +0000 (0:00:00.253) 0:24:42.297 ******* 2026-03-18 05:08:13.570618 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4, testbed-node-3 2026-03-18 05:08:13.570630 | orchestrator | 2026-03-18 05:08:13.570641 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-18 05:08:13.570651 | orchestrator | Wednesday 18 March 2026 05:08:11 +0000 (0:00:00.777) 0:24:43.075 ******* 2026-03-18 05:08:13.570662 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:13.570673 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:13.570684 | orchestrator | 2026-03-18 05:08:13.570694 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-18 05:08:13.570705 | orchestrator | Wednesday 18 March 2026 05:08:11 +0000 (0:00:00.261) 0:24:43.337 ******* 2026-03-18 05:08:13.570716 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:13.570726 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:13.570737 | orchestrator | 2026-03-18 05:08:13.570748 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-18 05:08:13.570764 | orchestrator | Wednesday 18 March 2026 05:08:11 +0000 (0:00:00.252) 0:24:43.589 ******* 2026-03-18 05:08:13.570775 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:13.570786 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:13.570797 | orchestrator | 2026-03-18 05:08:13.570807 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-18 05:08:13.570818 | orchestrator | Wednesday 18 March 2026 05:08:12 +0000 (0:00:00.250) 0:24:43.839 ******* 2026-03-18 05:08:13.570829 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:13.570840 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:13.570851 | orchestrator | 2026-03-18 
05:08:13.570861 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-18 05:08:13.570872 | orchestrator | Wednesday 18 March 2026 05:08:12 +0000 (0:00:00.254) 0:24:44.093 ******* 2026-03-18 05:08:13.570883 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:13.570894 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:13.570904 | orchestrator | 2026-03-18 05:08:13.570915 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-18 05:08:13.570926 | orchestrator | Wednesday 18 March 2026 05:08:12 +0000 (0:00:00.273) 0:24:44.366 ******* 2026-03-18 05:08:13.570937 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:13.570955 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:13.570965 | orchestrator | 2026-03-18 05:08:13.570976 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-18 05:08:13.570987 | orchestrator | Wednesday 18 March 2026 05:08:13 +0000 (0:00:00.550) 0:24:44.917 ******* 2026-03-18 05:08:13.570997 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:13.571008 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:13.571019 | orchestrator | 2026-03-18 05:08:13.571038 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-18 05:08:34.357973 | orchestrator | Wednesday 18 March 2026 05:08:13 +0000 (0:00:00.255) 0:24:45.173 ******* 2026-03-18 05:08:34.358090 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:34.358099 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:34.358103 | orchestrator | 2026-03-18 05:08:34.358108 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-18 05:08:34.358112 | orchestrator | Wednesday 18 March 2026 05:08:13 +0000 (0:00:00.262) 0:24:45.436 ******* 2026-03-18 05:08:34.358116 | orchestrator | ok: 
[testbed-node-4] 2026-03-18 05:08:34.358121 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:08:34.358125 | orchestrator | 2026-03-18 05:08:34.358129 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-18 05:08:34.358133 | orchestrator | Wednesday 18 March 2026 05:08:14 +0000 (0:00:00.408) 0:24:45.844 ******* 2026-03-18 05:08:34.358138 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4, testbed-node-3 2026-03-18 05:08:34.358142 | orchestrator | 2026-03-18 05:08:34.358146 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-18 05:08:34.358149 | orchestrator | Wednesday 18 March 2026 05:08:14 +0000 (0:00:00.696) 0:24:46.540 ******* 2026-03-18 05:08:34.358153 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-03-18 05:08:34.358158 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-03-18 05:08:34.358162 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-18 05:08:34.358165 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-18 05:08:34.358169 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-18 05:08:34.358173 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-18 05:08:34.358177 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-18 05:08:34.358180 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-18 05:08:34.358184 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-18 05:08:34.358188 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-18 05:08:34.358192 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-18 05:08:34.358195 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-18 05:08:34.358199 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 
2026-03-18 05:08:34.358203 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-18 05:08:34.358206 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-18 05:08:34.358211 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-18 05:08:34.358215 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 05:08:34.358219 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 05:08:34.358223 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 05:08:34.358227 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 05:08:34.358231 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 05:08:34.358234 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 05:08:34.358238 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 05:08:34.358242 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 05:08:34.358246 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 05:08:34.358266 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 05:08:34.358270 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 05:08:34.358274 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 05:08:34.358278 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-03-18 05:08:34.358282 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-03-18 05:08:34.358285 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-03-18 05:08:34.358329 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-03-18 05:08:34.358336 | orchestrator | 2026-03-18 05:08:34.358343 | orchestrator | TASK 
[ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-18 05:08:34.358358 | orchestrator | Wednesday 18 March 2026 05:08:20 +0000 (0:00:05.647) 0:24:52.188 ******* 2026-03-18 05:08:34.358362 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4, testbed-node-3 2026-03-18 05:08:34.358366 | orchestrator | 2026-03-18 05:08:34.358370 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-18 05:08:34.358374 | orchestrator | Wednesday 18 March 2026 05:08:20 +0000 (0:00:00.389) 0:24:52.577 ******* 2026-03-18 05:08:34.358378 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-18 05:08:34.358384 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-18 05:08:34.358388 | orchestrator | 2026-03-18 05:08:34.358392 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-18 05:08:34.358395 | orchestrator | Wednesday 18 March 2026 05:08:21 +0000 (0:00:00.609) 0:24:53.187 ******* 2026-03-18 05:08:34.358399 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-18 05:08:34.358403 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-18 05:08:34.358407 | orchestrator | 2026-03-18 05:08:34.358411 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-18 05:08:34.358433 | orchestrator | Wednesday 18 March 2026 05:08:22 +0000 (0:00:01.074) 0:24:54.261 ******* 2026-03-18 05:08:34.358437 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:34.358441 | orchestrator | 
skipping: [testbed-node-3] 2026-03-18 05:08:34.358445 | orchestrator | 2026-03-18 05:08:34.358449 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-18 05:08:34.358452 | orchestrator | Wednesday 18 March 2026 05:08:22 +0000 (0:00:00.231) 0:24:54.492 ******* 2026-03-18 05:08:34.358456 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:34.358460 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:34.358464 | orchestrator | 2026-03-18 05:08:34.358468 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-18 05:08:34.358471 | orchestrator | Wednesday 18 March 2026 05:08:23 +0000 (0:00:00.524) 0:24:55.017 ******* 2026-03-18 05:08:34.358475 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:34.358479 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:34.358483 | orchestrator | 2026-03-18 05:08:34.358486 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-18 05:08:34.358490 | orchestrator | Wednesday 18 March 2026 05:08:23 +0000 (0:00:00.240) 0:24:55.257 ******* 2026-03-18 05:08:34.358494 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:34.358498 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:34.358502 | orchestrator | 2026-03-18 05:08:34.358506 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-18 05:08:34.358510 | orchestrator | Wednesday 18 March 2026 05:08:23 +0000 (0:00:00.252) 0:24:55.510 ******* 2026-03-18 05:08:34.358513 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:34.358522 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:34.358526 | orchestrator | 2026-03-18 05:08:34.358529 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-18 05:08:34.358534 | orchestrator | Wednesday 18 March 2026 
05:08:24 +0000 (0:00:00.252) 0:24:55.762 ******* 2026-03-18 05:08:34.358537 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:34.358542 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:34.358547 | orchestrator | 2026-03-18 05:08:34.358551 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-18 05:08:34.358556 | orchestrator | Wednesday 18 March 2026 05:08:24 +0000 (0:00:00.269) 0:24:56.032 ******* 2026-03-18 05:08:34.358560 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:34.358564 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:34.358569 | orchestrator | 2026-03-18 05:08:34.358573 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-18 05:08:34.358578 | orchestrator | Wednesday 18 March 2026 05:08:24 +0000 (0:00:00.232) 0:24:56.264 ******* 2026-03-18 05:08:34.358582 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:34.358587 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:34.358591 | orchestrator | 2026-03-18 05:08:34.358595 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-18 05:08:34.358599 | orchestrator | Wednesday 18 March 2026 05:08:24 +0000 (0:00:00.253) 0:24:56.518 ******* 2026-03-18 05:08:34.358604 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:34.358608 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:34.358612 | orchestrator | 2026-03-18 05:08:34.358617 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-18 05:08:34.358621 | orchestrator | Wednesday 18 March 2026 05:08:25 +0000 (0:00:00.608) 0:24:57.126 ******* 2026-03-18 05:08:34.358626 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:34.358630 | orchestrator | skipping: [testbed-node-3] 2026-03-18 
05:08:34.358634 | orchestrator | 2026-03-18 05:08:34.358639 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-18 05:08:34.358643 | orchestrator | Wednesday 18 March 2026 05:08:25 +0000 (0:00:00.274) 0:24:57.400 ******* 2026-03-18 05:08:34.358647 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:34.358652 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:34.358656 | orchestrator | 2026-03-18 05:08:34.358661 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-18 05:08:34.358665 | orchestrator | Wednesday 18 March 2026 05:08:26 +0000 (0:00:00.281) 0:24:57.682 ******* 2026-03-18 05:08:34.358670 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-18 05:08:34.358674 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-03-18 05:08:34.358678 | orchestrator | 2026-03-18 05:08:34.358685 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-18 05:08:34.358689 | orchestrator | Wednesday 18 March 2026 05:08:29 +0000 (0:00:03.857) 0:25:01.539 ******* 2026-03-18 05:08:34.358694 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-18 05:08:34.358699 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-18 05:08:34.358703 | orchestrator | 2026-03-18 05:08:34.358707 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-18 05:08:34.358712 | orchestrator | Wednesday 18 March 2026 05:08:30 +0000 (0:00:00.310) 0:25:01.849 ******* 2026-03-18 05:08:34.358718 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-18 05:08:34.358732 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-03-18 05:08:57.773421 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-18 05:08:57.773530 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-03-18 05:08:57.773544 | orchestrator | 2026-03-18 05:08:57.773556 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-18 05:08:57.773567 | orchestrator | Wednesday 18 March 2026 05:08:34 +0000 (0:00:04.108) 0:25:05.957 ******* 2026-03-18 05:08:57.773577 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:57.773588 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:57.773598 | orchestrator | 2026-03-18 05:08:57.773608 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-18 05:08:57.773617 | orchestrator | Wednesday 18 March 2026 05:08:34 +0000 
(0:00:00.572) 0:25:06.530 ******* 2026-03-18 05:08:57.773627 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:57.773637 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:57.773646 | orchestrator | 2026-03-18 05:08:57.773657 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 05:08:57.773668 | orchestrator | Wednesday 18 March 2026 05:08:35 +0000 (0:00:00.260) 0:25:06.790 ******* 2026-03-18 05:08:57.773678 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:57.773687 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:57.773697 | orchestrator | 2026-03-18 05:08:57.773707 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-18 05:08:57.773716 | orchestrator | Wednesday 18 March 2026 05:08:35 +0000 (0:00:00.256) 0:25:07.047 ******* 2026-03-18 05:08:57.773726 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:57.773736 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:57.773745 | orchestrator | 2026-03-18 05:08:57.773755 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 05:08:57.773772 | orchestrator | Wednesday 18 March 2026 05:08:35 +0000 (0:00:00.285) 0:25:07.332 ******* 2026-03-18 05:08:57.773817 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:57.773835 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:57.773850 | orchestrator | 2026-03-18 05:08:57.773867 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 05:08:57.773881 | orchestrator | Wednesday 18 March 2026 05:08:35 +0000 (0:00:00.258) 0:25:07.591 ******* 2026-03-18 05:08:57.773896 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:08:57.773914 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:08:57.773930 | orchestrator | 2026-03-18 
05:08:57.773948 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 05:08:57.773965 | orchestrator | Wednesday 18 March 2026 05:08:36 +0000 (0:00:00.361) 0:25:07.952 ******* 2026-03-18 05:08:57.773983 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-18 05:08:57.774002 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-18 05:08:57.774098 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-18 05:08:57.774124 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:57.774177 | orchestrator | 2026-03-18 05:08:57.774196 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 05:08:57.774235 | orchestrator | Wednesday 18 March 2026 05:08:37 +0000 (0:00:00.801) 0:25:08.753 ******* 2026-03-18 05:08:57.774267 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-18 05:08:57.774286 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-18 05:08:57.774404 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-18 05:08:57.774430 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:57.774448 | orchestrator | 2026-03-18 05:08:57.774467 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 05:08:57.774486 | orchestrator | Wednesday 18 March 2026 05:08:37 +0000 (0:00:00.755) 0:25:09.509 ******* 2026-03-18 05:08:57.774506 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-18 05:08:57.774525 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-18 05:08:57.774544 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-18 05:08:57.774562 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:57.774580 | orchestrator | 2026-03-18 05:08:57.774595 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-03-18 05:08:57.774611 | orchestrator | Wednesday 18 March 2026 05:08:39 +0000 (0:00:01.112) 0:25:10.621 ******* 2026-03-18 05:08:57.774628 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:08:57.774645 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:08:57.774661 | orchestrator | 2026-03-18 05:08:57.774677 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 05:08:57.774693 | orchestrator | Wednesday 18 March 2026 05:08:39 +0000 (0:00:00.263) 0:25:10.885 ******* 2026-03-18 05:08:57.774709 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-18 05:08:57.774726 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-18 05:08:57.774743 | orchestrator | 2026-03-18 05:08:57.774760 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-18 05:08:57.774776 | orchestrator | Wednesday 18 March 2026 05:08:39 +0000 (0:00:00.664) 0:25:11.549 ******* 2026-03-18 05:08:57.774792 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:08:57.774807 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:08:57.774823 | orchestrator | 2026-03-18 05:08:57.774867 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-18 05:08:57.774885 | orchestrator | Wednesday 18 March 2026 05:08:40 +0000 (0:00:01.002) 0:25:12.551 ******* 2026-03-18 05:08:57.774901 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:57.774917 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:57.774933 | orchestrator | 2026-03-18 05:08:57.774951 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-18 05:08:57.774966 | orchestrator | Wednesday 18 March 2026 05:08:41 +0000 (0:00:00.258) 0:25:12.810 ******* 2026-03-18 05:08:57.774981 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4, 
testbed-node-3 2026-03-18 05:08:57.774997 | orchestrator | 2026-03-18 05:08:57.775013 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-18 05:08:57.775028 | orchestrator | Wednesday 18 March 2026 05:08:41 +0000 (0:00:00.688) 0:25:13.498 ******* 2026-03-18 05:08:57.775045 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-18 05:08:57.775060 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-18 05:08:57.775077 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-18 05:08:57.775093 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-18 05:08:57.775109 | orchestrator | 2026-03-18 05:08:57.775125 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-18 05:08:57.775141 | orchestrator | Wednesday 18 March 2026 05:08:42 +0000 (0:00:00.970) 0:25:14.469 ******* 2026-03-18 05:08:57.775158 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 05:08:57.775195 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-18 05:08:57.775211 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-18 05:08:57.775228 | orchestrator | 2026-03-18 05:08:57.775245 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-18 05:08:57.775262 | orchestrator | Wednesday 18 March 2026 05:08:45 +0000 (0:00:02.262) 0:25:16.731 ******* 2026-03-18 05:08:57.775277 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-18 05:08:57.775293 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-18 05:08:57.775337 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:08:57.775354 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-18 05:08:57.775369 | orchestrator | skipping: [testbed-node-3] => 
(item=None)  2026-03-18 05:08:57.775385 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:08:57.775401 | orchestrator | 2026-03-18 05:08:57.775417 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-18 05:08:57.775433 | orchestrator | Wednesday 18 March 2026 05:08:46 +0000 (0:00:01.054) 0:25:17.785 ******* 2026-03-18 05:08:57.775448 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:08:57.775463 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:08:57.775478 | orchestrator | 2026-03-18 05:08:57.775494 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-18 05:08:57.775510 | orchestrator | Wednesday 18 March 2026 05:08:46 +0000 (0:00:00.597) 0:25:18.383 ******* 2026-03-18 05:08:57.775525 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:57.775541 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:08:57.775557 | orchestrator | 2026-03-18 05:08:57.775572 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-18 05:08:57.775587 | orchestrator | Wednesday 18 March 2026 05:08:47 +0000 (0:00:00.250) 0:25:18.633 ******* 2026-03-18 05:08:57.775603 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4, testbed-node-3 2026-03-18 05:08:57.775620 | orchestrator | 2026-03-18 05:08:57.775636 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-18 05:08:57.775652 | orchestrator | Wednesday 18 March 2026 05:08:47 +0000 (0:00:00.670) 0:25:19.304 ******* 2026-03-18 05:08:57.775668 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4, testbed-node-3 2026-03-18 05:08:57.775685 | orchestrator | 2026-03-18 05:08:57.775700 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-18 05:08:57.775728 | orchestrator | Wednesday 18 March 2026 
05:08:48 +0000 (0:00:00.391) 0:25:19.696 ******* 2026-03-18 05:08:57.775745 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:08:57.775761 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:08:57.775778 | orchestrator | 2026-03-18 05:08:57.775795 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-18 05:08:57.775812 | orchestrator | Wednesday 18 March 2026 05:08:49 +0000 (0:00:01.181) 0:25:20.878 ******* 2026-03-18 05:08:57.775828 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:08:57.775845 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:08:57.775861 | orchestrator | 2026-03-18 05:08:57.775877 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-18 05:08:57.775892 | orchestrator | Wednesday 18 March 2026 05:08:50 +0000 (0:00:01.068) 0:25:21.946 ******* 2026-03-18 05:08:57.775905 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:08:57.775920 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:08:57.775935 | orchestrator | 2026-03-18 05:08:57.775949 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-18 05:08:57.775962 | orchestrator | Wednesday 18 March 2026 05:08:51 +0000 (0:00:01.328) 0:25:23.275 ******* 2026-03-18 05:08:57.775977 | orchestrator | changed: [testbed-node-4] 2026-03-18 05:08:57.775991 | orchestrator | changed: [testbed-node-3] 2026-03-18 05:08:57.776007 | orchestrator | 2026-03-18 05:08:57.776019 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-18 05:08:57.776032 | orchestrator | Wednesday 18 March 2026 05:08:54 +0000 (0:00:02.689) 0:25:25.965 ******* 2026-03-18 05:08:57.776064 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:08:57.776078 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:08:57.776092 | orchestrator | 2026-03-18 05:08:57.776107 | orchestrator | TASK [Set max_mds] 
************************************************************* 2026-03-18 05:08:57.776123 | orchestrator | Wednesday 18 March 2026 05:08:55 +0000 (0:00:00.893) 0:25:26.858 ******* 2026-03-18 05:08:57.776140 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:08:57.776175 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-18 05:09:05.403115 | orchestrator | 2026-03-18 05:09:05.403212 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-18 05:09:05.403223 | orchestrator | 2026-03-18 05:09:05.403231 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 05:09:05.403239 | orchestrator | Wednesday 18 March 2026 05:08:57 +0000 (0:00:02.516) 0:25:29.375 ******* 2026-03-18 05:09:05.403246 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-03-18 05:09:05.403253 | orchestrator | 2026-03-18 05:09:05.403260 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-18 05:09:05.403267 | orchestrator | Wednesday 18 March 2026 05:08:58 +0000 (0:00:00.247) 0:25:29.622 ******* 2026-03-18 05:09:05.403274 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:05.403282 | orchestrator | 2026-03-18 05:09:05.403289 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-18 05:09:05.403295 | orchestrator | Wednesday 18 March 2026 05:08:58 +0000 (0:00:00.514) 0:25:30.137 ******* 2026-03-18 05:09:05.403302 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:05.403309 | orchestrator | 2026-03-18 05:09:05.403360 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 05:09:05.403368 | orchestrator | Wednesday 18 March 2026 05:08:58 +0000 (0:00:00.151) 0:25:30.288 ******* 2026-03-18 05:09:05.403374 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:05.403381 | 
orchestrator | 2026-03-18 05:09:05.403388 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 05:09:05.403395 | orchestrator | Wednesday 18 March 2026 05:08:59 +0000 (0:00:00.779) 0:25:31.067 ******* 2026-03-18 05:09:05.403401 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:05.403408 | orchestrator | 2026-03-18 05:09:05.403415 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-18 05:09:05.403421 | orchestrator | Wednesday 18 March 2026 05:08:59 +0000 (0:00:00.172) 0:25:31.239 ******* 2026-03-18 05:09:05.403428 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:05.403435 | orchestrator | 2026-03-18 05:09:05.403441 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-18 05:09:05.403448 | orchestrator | Wednesday 18 March 2026 05:08:59 +0000 (0:00:00.164) 0:25:31.404 ******* 2026-03-18 05:09:05.403455 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:05.403461 | orchestrator | 2026-03-18 05:09:05.403468 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-18 05:09:05.403476 | orchestrator | Wednesday 18 March 2026 05:08:59 +0000 (0:00:00.160) 0:25:31.564 ******* 2026-03-18 05:09:05.403483 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:05.403490 | orchestrator | 2026-03-18 05:09:05.403497 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-18 05:09:05.403504 | orchestrator | Wednesday 18 March 2026 05:09:00 +0000 (0:00:00.148) 0:25:31.713 ******* 2026-03-18 05:09:05.403510 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:05.403517 | orchestrator | 2026-03-18 05:09:05.403524 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-18 05:09:05.403530 | orchestrator | Wednesday 18 March 2026 05:09:00 +0000 
(0:00:00.151) 0:25:31.865 ******* 2026-03-18 05:09:05.403537 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 05:09:05.403544 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:09:05.403573 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:09:05.403580 | orchestrator | 2026-03-18 05:09:05.403587 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-18 05:09:05.403593 | orchestrator | Wednesday 18 March 2026 05:09:01 +0000 (0:00:00.779) 0:25:32.644 ******* 2026-03-18 05:09:05.403600 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:05.403606 | orchestrator | 2026-03-18 05:09:05.403613 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-18 05:09:05.403619 | orchestrator | Wednesday 18 March 2026 05:09:01 +0000 (0:00:00.260) 0:25:32.904 ******* 2026-03-18 05:09:05.403626 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 05:09:05.403643 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:09:05.403650 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:09:05.403659 | orchestrator | 2026-03-18 05:09:05.403667 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-18 05:09:05.403675 | orchestrator | Wednesday 18 March 2026 05:09:03 +0000 (0:00:02.224) 0:25:35.129 ******* 2026-03-18 05:09:05.403683 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-18 05:09:05.403691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-18 05:09:05.403699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-18 
05:09:05.403707 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:05.403715 | orchestrator | 2026-03-18 05:09:05.403723 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-18 05:09:05.403730 | orchestrator | Wednesday 18 March 2026 05:09:03 +0000 (0:00:00.487) 0:25:35.616 ******* 2026-03-18 05:09:05.403740 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-18 05:09:05.403751 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-18 05:09:05.403771 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-18 05:09:05.403780 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:05.403789 | orchestrator | 2026-03-18 05:09:05.403797 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-18 05:09:05.403804 | orchestrator | Wednesday 18 March 2026 05:09:04 +0000 (0:00:00.972) 0:25:36.589 ******* 2026-03-18 05:09:05.403813 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 
05:09:05.403821 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:05.403828 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:05.403840 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:05.403847 | orchestrator | 2026-03-18 05:09:05.403854 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-18 05:09:05.403860 | orchestrator | Wednesday 18 March 2026 05:09:05 +0000 (0:00:00.188) 0:25:36.777 ******* 2026-03-18 05:09:05.403869 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f231ed715636', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 05:09:01.828997', 'end': '2026-03-18 05:09:01.886198', 'delta': '0:00:00.057201', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f231ed715636'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-18 05:09:05.403882 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c6b616adb9bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 05:09:02.442245', 'end': '2026-03-18 05:09:02.491827', 'delta': '0:00:00.049582', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c6b616adb9bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-18 05:09:05.403890 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '38d5679b5612', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 05:09:03.306465', 'end': '2026-03-18 05:09:03.358270', 'delta': '0:00:00.051805', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['38d5679b5612'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-18 05:09:05.403897 | orchestrator | 2026-03-18 05:09:05.403908 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-18 05:09:10.028244 | orchestrator | Wednesday 18 March 2026 05:09:05 +0000 (0:00:00.234) 0:25:37.012 ******* 2026-03-18 05:09:10.028421 | orchestrator | ok: [testbed-node-3] 2026-03-18 
05:09:10.028442 | orchestrator | 2026-03-18 05:09:10.028455 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-18 05:09:10.028467 | orchestrator | Wednesday 18 March 2026 05:09:06 +0000 (0:00:01.000) 0:25:38.013 ******* 2026-03-18 05:09:10.028478 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:10.028492 | orchestrator | 2026-03-18 05:09:10.028503 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-18 05:09:10.028514 | orchestrator | Wednesday 18 March 2026 05:09:06 +0000 (0:00:00.285) 0:25:38.298 ******* 2026-03-18 05:09:10.028525 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:10.028536 | orchestrator | 2026-03-18 05:09:10.028547 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-18 05:09:10.028558 | orchestrator | Wednesday 18 March 2026 05:09:06 +0000 (0:00:00.152) 0:25:38.450 ******* 2026-03-18 05:09:10.028594 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-18 05:09:10.028606 | orchestrator | 2026-03-18 05:09:10.028616 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 05:09:10.028627 | orchestrator | Wednesday 18 March 2026 05:09:07 +0000 (0:00:00.980) 0:25:39.431 ******* 2026-03-18 05:09:10.028638 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:10.028649 | orchestrator | 2026-03-18 05:09:10.028660 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-18 05:09:10.028671 | orchestrator | Wednesday 18 March 2026 05:09:07 +0000 (0:00:00.186) 0:25:39.617 ******* 2026-03-18 05:09:10.028682 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:10.028693 | orchestrator | 2026-03-18 05:09:10.028703 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-18 05:09:10.028714 | orchestrator 
| Wednesday 18 March 2026 05:09:08 +0000 (0:00:00.135) 0:25:39.752 ******* 2026-03-18 05:09:10.028725 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:10.028736 | orchestrator | 2026-03-18 05:09:10.028747 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 05:09:10.028758 | orchestrator | Wednesday 18 March 2026 05:09:08 +0000 (0:00:00.252) 0:25:40.005 ******* 2026-03-18 05:09:10.028770 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:10.028784 | orchestrator | 2026-03-18 05:09:10.028797 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-18 05:09:10.028810 | orchestrator | Wednesday 18 March 2026 05:09:08 +0000 (0:00:00.140) 0:25:40.145 ******* 2026-03-18 05:09:10.028823 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:10.028835 | orchestrator | 2026-03-18 05:09:10.028848 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-18 05:09:10.028860 | orchestrator | Wednesday 18 March 2026 05:09:08 +0000 (0:00:00.126) 0:25:40.272 ******* 2026-03-18 05:09:10.028873 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:10.028886 | orchestrator | 2026-03-18 05:09:10.028899 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-18 05:09:10.028912 | orchestrator | Wednesday 18 March 2026 05:09:08 +0000 (0:00:00.172) 0:25:40.444 ******* 2026-03-18 05:09:10.028924 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:10.028937 | orchestrator | 2026-03-18 05:09:10.028950 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-18 05:09:10.028961 | orchestrator | Wednesday 18 March 2026 05:09:08 +0000 (0:00:00.140) 0:25:40.585 ******* 2026-03-18 05:09:10.028972 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:10.028982 | orchestrator | 2026-03-18 05:09:10.028993 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-18 05:09:10.029004 | orchestrator | Wednesday 18 March 2026 05:09:09 +0000 (0:00:00.200) 0:25:40.785 ******* 2026-03-18 05:09:10.029015 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:10.029026 | orchestrator | 2026-03-18 05:09:10.029036 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-18 05:09:10.029048 | orchestrator | Wednesday 18 March 2026 05:09:09 +0000 (0:00:00.145) 0:25:40.930 ******* 2026-03-18 05:09:10.029059 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:10.029070 | orchestrator | 2026-03-18 05:09:10.029095 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-18 05:09:10.029106 | orchestrator | Wednesday 18 March 2026 05:09:09 +0000 (0:00:00.473) 0:25:41.404 ******* 2026-03-18 05:09:10.029119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:09:10.029136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a', 'dm-uuid-LVM-OBDgCO1TfJO26KZndmcG4XUfdlxxEe11eqb03b1R3TiAd5BAik4vvOnTIot4pXZ1'], 'uuids': ['55d52066-97cb-48c1-a9a5-651ff762c061'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ebabc839', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1']}})  2026-03-18 05:09:10.029177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa', 'scsi-SQEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '26f175df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-18 05:09:10.029192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hyLInL-qBmT-hkMu-ewvD-iGD6-c0uQ-hDScLy', 'scsi-0QEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768', 'scsi-SQEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3c07f10e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb']}})  2026-03-18 05:09:10.029204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:09:10.029215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:09:10.029233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 05:09:10.029245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:09:10.029263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3', 'dm-uuid-CRYPT-LUKS2-7a3d4fd16bbc483aab118d6b9a67b0a4-TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-18 05:09:10.029283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:09:10.408053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb', 'dm-uuid-LVM-W5EL6s0cOZukCJgJFLnUeUfZF3v581ieTXRD31C4XH2D2TZlGP7o3YPUberRNbx3'], 'uuids': ['7a3d4fd1-6bbc-483a-ab11-8d6b9a67b0a4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3c07f10e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3']}})  2026-03-18 05:09:10.408154 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-w7AXxM-UwrZ-P6aH-00LI-mMT0-kFYy-HZNbAJ', 'scsi-0QEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e', 'scsi-SQEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ebabc839', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a']}})  2026-03-18 05:09:10.408172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:09:10.408208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1c5784ed', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-18 05:09:10.408260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:09:10.408273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:09:10.408286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1', 'dm-uuid-CRYPT-LUKS2-55d5206697cb48c1a9a5651ff762c061-eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-18 05:09:10.408299 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:10.408312 | orchestrator | 2026-03-18 05:09:10.408370 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-18 05:09:10.408392 | orchestrator | Wednesday 18 March 2026 05:09:10 +0000 (0:00:00.376) 0:25:41.780 ******* 2026-03-18 05:09:10.408413 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:10.408445 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a', 'dm-uuid-LVM-OBDgCO1TfJO26KZndmcG4XUfdlxxEe11eqb03b1R3TiAd5BAik4vvOnTIot4pXZ1'], 'uuids': ['55d52066-97cb-48c1-a9a5-651ff762c061'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ebabc839', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:10.408478 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa', 'scsi-SQEMU_QEMU_HARDDISK_26f175df-aba2-4da2-ab55-e525c2d3b7aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '26f175df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:10.408502 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hyLInL-qBmT-hkMu-ewvD-iGD6-c0uQ-hDScLy', 'scsi-0QEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768', 'scsi-SQEMU_QEMU_HARDDISK_3c07f10e-07ed-4136-af5a-52ab111aa768'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3c07f10e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:10.581913 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:10.581986 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:10.582003 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:10.582055 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:10.582060 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3', 'dm-uuid-CRYPT-LUKS2-7a3d4fd16bbc483aab118d6b9a67b0a4-TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:10.582064 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:10.582080 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--dcb28020--3d32--5af4--a4b7--0acc667eefcb-osd--block--dcb28020--3d32--5af4--a4b7--0acc667eefcb', 'dm-uuid-LVM-W5EL6s0cOZukCJgJFLnUeUfZF3v581ieTXRD31C4XH2D2TZlGP7o3YPUberRNbx3'], 'uuids': ['7a3d4fd1-6bbc-483a-ab11-8d6b9a67b0a4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3c07f10e', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TXRD31-C4XH-2D2T-ZlGP-7o3Y-PUbe-rRNbx3']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:10.582086 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-w7AXxM-UwrZ-P6aH-00LI-mMT0-kFYy-HZNbAJ', 'scsi-0QEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e', 'scsi-SQEMU_QEMU_HARDDISK_ebabc839-a277-44fc-abeb-49fc313c2e1e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ebabc839', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--9a3797da--ebdd--566a--aa35--3713ec7e039a-osd--block--9a3797da--ebdd--566a--aa35--3713ec7e039a']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:10.582099 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:10.582108 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1c5784ed', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1', 'scsi-SQEMU_QEMU_HARDDISK_1c5784ed-a5cf-4a45-b5e2-476691d23561-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:19.476974 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:19.477108 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:19.477147 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1', 'dm-uuid-CRYPT-LUKS2-55d5206697cb48c1a9a5651ff762c061-eqb03b-1R3T-iAd5-BAik-4vvO-nTIo-t4pXZ1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:09:19.477161 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:19.477176 | orchestrator | 2026-03-18 05:09:19.477190 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-18 05:09:19.477202 | orchestrator | Wednesday 18 March 2026 05:09:10 +0000 (0:00:00.409) 0:25:42.190 ******* 2026-03-18 05:09:19.477214 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:19.477226 | orchestrator | 2026-03-18 05:09:19.477238 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-18 05:09:19.477249 | orchestrator | Wednesday 18 March 2026 05:09:11 +0000 (0:00:00.489) 0:25:42.679 ******* 2026-03-18 05:09:19.477261 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:19.477272 | orchestrator | 2026-03-18 05:09:19.477284 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 05:09:19.477295 | orchestrator | Wednesday 18 March 2026 05:09:11 +0000 (0:00:00.159) 0:25:42.839 ******* 2026-03-18 05:09:19.477307 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:19.477318 | orchestrator | 2026-03-18 05:09:19.477380 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 05:09:19.477391 | orchestrator | Wednesday 18 March 2026 05:09:11 +0000 (0:00:00.456) 0:25:43.295 ******* 2026-03-18 05:09:19.477402 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:19.477413 | orchestrator | 2026-03-18 05:09:19.477424 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 05:09:19.477434 | orchestrator | Wednesday 18 March 2026 05:09:11 +0000 (0:00:00.126) 0:25:43.422 ******* 2026-03-18 05:09:19.477445 | orchestrator | skipping: [testbed-node-3] 2026-03-18 
05:09:19.477455 | orchestrator | 2026-03-18 05:09:19.477466 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 05:09:19.477477 | orchestrator | Wednesday 18 March 2026 05:09:12 +0000 (0:00:00.266) 0:25:43.688 ******* 2026-03-18 05:09:19.477487 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:19.477498 | orchestrator | 2026-03-18 05:09:19.477510 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-18 05:09:19.477523 | orchestrator | Wednesday 18 March 2026 05:09:12 +0000 (0:00:00.157) 0:25:43.846 ******* 2026-03-18 05:09:19.477536 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-18 05:09:19.477548 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-18 05:09:19.477560 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-18 05:09:19.477574 | orchestrator | 2026-03-18 05:09:19.477586 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-18 05:09:19.477598 | orchestrator | Wednesday 18 March 2026 05:09:13 +0000 (0:00:01.010) 0:25:44.857 ******* 2026-03-18 05:09:19.477611 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-18 05:09:19.477624 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-18 05:09:19.477645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-18 05:09:19.477658 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:19.477671 | orchestrator | 2026-03-18 05:09:19.477683 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-18 05:09:19.477696 | orchestrator | Wednesday 18 March 2026 05:09:13 +0000 (0:00:00.186) 0:25:45.043 ******* 2026-03-18 05:09:19.477725 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-03-18 05:09:19.477739 | 
orchestrator | 2026-03-18 05:09:19.477754 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 05:09:19.477768 | orchestrator | Wednesday 18 March 2026 05:09:13 +0000 (0:00:00.231) 0:25:45.275 ******* 2026-03-18 05:09:19.477780 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:19.477793 | orchestrator | 2026-03-18 05:09:19.477806 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-18 05:09:19.477818 | orchestrator | Wednesday 18 March 2026 05:09:14 +0000 (0:00:00.463) 0:25:45.739 ******* 2026-03-18 05:09:19.477831 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:19.477843 | orchestrator | 2026-03-18 05:09:19.477856 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 05:09:19.477868 | orchestrator | Wednesday 18 March 2026 05:09:14 +0000 (0:00:00.164) 0:25:45.903 ******* 2026-03-18 05:09:19.477878 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:19.477889 | orchestrator | 2026-03-18 05:09:19.477899 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 05:09:19.477910 | orchestrator | Wednesday 18 March 2026 05:09:14 +0000 (0:00:00.173) 0:25:46.077 ******* 2026-03-18 05:09:19.477921 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:19.477931 | orchestrator | 2026-03-18 05:09:19.477942 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 05:09:19.477952 | orchestrator | Wednesday 18 March 2026 05:09:14 +0000 (0:00:00.300) 0:25:46.378 ******* 2026-03-18 05:09:19.477969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 05:09:19.477980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 05:09:19.477991 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-18 05:09:19.478002 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:19.478013 | orchestrator | 2026-03-18 05:09:19.478090 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 05:09:19.478101 | orchestrator | Wednesday 18 March 2026 05:09:15 +0000 (0:00:00.451) 0:25:46.829 ******* 2026-03-18 05:09:19.478112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 05:09:19.478123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 05:09:19.478134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 05:09:19.478145 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:19.478155 | orchestrator | 2026-03-18 05:09:19.478166 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 05:09:19.478177 | orchestrator | Wednesday 18 March 2026 05:09:15 +0000 (0:00:00.418) 0:25:47.248 ******* 2026-03-18 05:09:19.478188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-18 05:09:19.478198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-18 05:09:19.478209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-18 05:09:19.478220 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:19.478231 | orchestrator | 2026-03-18 05:09:19.478242 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 05:09:19.478253 | orchestrator | Wednesday 18 March 2026 05:09:16 +0000 (0:00:00.423) 0:25:47.672 ******* 2026-03-18 05:09:19.478264 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:19.478274 | orchestrator | 2026-03-18 05:09:19.478285 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 05:09:19.478304 | orchestrator | Wednesday 18 March 2026 05:09:16 +0000 
(0:00:00.171) 0:25:47.843 ******* 2026-03-18 05:09:19.478315 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-18 05:09:19.478396 | orchestrator | 2026-03-18 05:09:19.478409 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-18 05:09:19.478420 | orchestrator | Wednesday 18 March 2026 05:09:16 +0000 (0:00:00.349) 0:25:48.193 ******* 2026-03-18 05:09:19.478431 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 05:09:19.478442 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:09:19.478452 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:09:19.478463 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-18 05:09:19.478474 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 05:09:19.478485 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 05:09:19.478496 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 05:09:19.478507 | orchestrator | 2026-03-18 05:09:19.478517 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-18 05:09:19.478528 | orchestrator | Wednesday 18 March 2026 05:09:17 +0000 (0:00:01.152) 0:25:49.346 ******* 2026-03-18 05:09:19.478539 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 05:09:19.478550 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:09:19.478560 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:09:19.478571 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-18 05:09:19.478582 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 05:09:19.478593 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-18 05:09:19.478604 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 05:09:19.478615 | orchestrator | 2026-03-18 05:09:19.478634 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-03-18 05:09:35.214576 | orchestrator | Wednesday 18 March 2026 05:09:19 +0000 (0:00:01.732) 0:25:51.078 ******* 2026-03-18 05:09:35.214693 | orchestrator | changed: [testbed-node-3] 2026-03-18 05:09:35.214710 | orchestrator | 2026-03-18 05:09:35.214721 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-03-18 05:09:35.214731 | orchestrator | Wednesday 18 March 2026 05:09:20 +0000 (0:00:01.276) 0:25:52.355 ******* 2026-03-18 05:09:35.214760 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-18 05:09:35.214772 | orchestrator | 2026-03-18 05:09:35.214782 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-03-18 05:09:35.214792 | orchestrator | Wednesday 18 March 2026 05:09:22 +0000 (0:00:02.222) 0:25:54.578 ******* 2026-03-18 05:09:35.214802 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-18 05:09:35.214812 | orchestrator | 2026-03-18 05:09:35.214822 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-18 05:09:35.214832 | orchestrator | Wednesday 18 March 2026 05:09:24 +0000 (0:00:01.301) 0:25:55.880 ******* 2026-03-18 05:09:35.214842 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-03-18 05:09:35.214860 | orchestrator | 2026-03-18 05:09:35.214874 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-18 05:09:35.214899 | orchestrator | Wednesday 18 March 2026 05:09:24 +0000 (0:00:00.202) 0:25:56.082 ******* 2026-03-18 05:09:35.214909 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-03-18 05:09:35.214939 | orchestrator | 2026-03-18 05:09:35.214950 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-18 05:09:35.214960 | orchestrator | Wednesday 18 March 2026 05:09:24 +0000 (0:00:00.219) 0:25:56.302 ******* 2026-03-18 05:09:35.214969 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:35.214979 | orchestrator | 2026-03-18 05:09:35.214989 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-18 05:09:35.214999 | orchestrator | Wednesday 18 March 2026 05:09:24 +0000 (0:00:00.142) 0:25:56.445 ******* 2026-03-18 05:09:35.215008 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:35.215019 | orchestrator | 2026-03-18 05:09:35.215029 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-03-18 05:09:35.215038 | orchestrator | Wednesday 18 March 2026 05:09:25 +0000 (0:00:00.552) 0:25:56.998 ******* 2026-03-18 05:09:35.215048 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:35.215057 | orchestrator | 2026-03-18 05:09:35.215067 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-18 05:09:35.215077 | orchestrator | Wednesday 18 March 2026 05:09:25 +0000 (0:00:00.563) 0:25:57.561 ******* 2026-03-18 05:09:35.215086 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:35.215100 | orchestrator | 2026-03-18 05:09:35.215117 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-18 05:09:35.215128 | orchestrator | Wednesday 18 March 2026 05:09:26 +0000 (0:00:00.526) 0:25:58.088 ******* 2026-03-18 05:09:35.215140 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:35.215151 | orchestrator | 2026-03-18 05:09:35.215163 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-18 05:09:35.215174 | orchestrator | Wednesday 18 March 2026 05:09:26 +0000 (0:00:00.142) 0:25:58.230 ******* 2026-03-18 05:09:35.215185 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:35.215196 | orchestrator | 2026-03-18 05:09:35.215207 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-18 05:09:35.215219 | orchestrator | Wednesday 18 March 2026 05:09:26 +0000 (0:00:00.140) 0:25:58.371 ******* 2026-03-18 05:09:35.215229 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:35.215241 | orchestrator | 2026-03-18 05:09:35.215252 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-18 05:09:35.215263 | orchestrator | Wednesday 18 March 2026 05:09:26 +0000 (0:00:00.163) 0:25:58.534 ******* 2026-03-18 05:09:35.215274 | 
orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:35.215284 | orchestrator | 2026-03-18 05:09:35.215295 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-18 05:09:35.215306 | orchestrator | Wednesday 18 March 2026 05:09:27 +0000 (0:00:00.565) 0:25:59.100 ******* 2026-03-18 05:09:35.215317 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:35.215328 | orchestrator | 2026-03-18 05:09:35.215367 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-18 05:09:35.215386 | orchestrator | Wednesday 18 March 2026 05:09:28 +0000 (0:00:01.218) 0:26:00.319 ******* 2026-03-18 05:09:35.215404 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:35.215420 | orchestrator | 2026-03-18 05:09:35.215433 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-18 05:09:35.215450 | orchestrator | Wednesday 18 March 2026 05:09:28 +0000 (0:00:00.138) 0:26:00.457 ******* 2026-03-18 05:09:35.215465 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:09:35.215475 | orchestrator | 2026-03-18 05:09:35.215485 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-18 05:09:35.215494 | orchestrator | Wednesday 18 March 2026 05:09:28 +0000 (0:00:00.138) 0:26:00.596 ******* 2026-03-18 05:09:35.215504 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:35.215513 | orchestrator | 2026-03-18 05:09:35.215523 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-18 05:09:35.215532 | orchestrator | Wednesday 18 March 2026 05:09:29 +0000 (0:00:00.170) 0:26:00.766 ******* 2026-03-18 05:09:35.215550 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:09:35.215560 | orchestrator | 2026-03-18 05:09:35.215569 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-18 05:09:35.215579 
| orchestrator | Wednesday 18 March 2026 05:09:29 +0000 (0:00:00.166) 0:26:00.933 *******
2026-03-18 05:09:35.215588 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:09:35.215598 | orchestrator |
2026-03-18 05:09:35.215623 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-18 05:09:35.215633 | orchestrator | Wednesday 18 March 2026 05:09:29 +0000 (0:00:00.170) 0:26:01.103 *******
2026-03-18 05:09:35.215643 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:35.215652 | orchestrator |
2026-03-18 05:09:35.215662 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-18 05:09:35.215672 | orchestrator | Wednesday 18 March 2026 05:09:29 +0000 (0:00:00.134) 0:26:01.238 *******
2026-03-18 05:09:35.215681 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:35.215691 | orchestrator |
2026-03-18 05:09:35.215700 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-18 05:09:35.215710 | orchestrator | Wednesday 18 March 2026 05:09:29 +0000 (0:00:00.151) 0:26:01.390 *******
2026-03-18 05:09:35.215719 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:35.215729 | orchestrator |
2026-03-18 05:09:35.215738 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-18 05:09:35.215748 | orchestrator | Wednesday 18 March 2026 05:09:29 +0000 (0:00:00.141) 0:26:01.531 *******
2026-03-18 05:09:35.215758 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:09:35.215767 | orchestrator |
2026-03-18 05:09:35.215776 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-18 05:09:35.215786 | orchestrator | Wednesday 18 March 2026 05:09:30 +0000 (0:00:00.178) 0:26:01.709 *******
2026-03-18 05:09:35.215795 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:09:35.215805 | orchestrator |
2026-03-18 05:09:35.215814 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-18 05:09:35.215824 | orchestrator | Wednesday 18 March 2026 05:09:30 +0000 (0:00:00.270) 0:26:01.979 *******
2026-03-18 05:09:35.215833 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:35.215843 | orchestrator |
2026-03-18 05:09:35.215858 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-18 05:09:35.215868 | orchestrator | Wednesday 18 March 2026 05:09:30 +0000 (0:00:00.170) 0:26:02.150 *******
2026-03-18 05:09:35.215877 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:35.215887 | orchestrator |
2026-03-18 05:09:35.215896 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-18 05:09:35.215906 | orchestrator | Wednesday 18 March 2026 05:09:31 +0000 (0:00:00.471) 0:26:02.621 *******
2026-03-18 05:09:35.215915 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:35.215925 | orchestrator |
2026-03-18 05:09:35.215935 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-18 05:09:35.215944 | orchestrator | Wednesday 18 March 2026 05:09:31 +0000 (0:00:00.158) 0:26:02.779 *******
2026-03-18 05:09:35.215954 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:35.215963 | orchestrator |
2026-03-18 05:09:35.215972 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-18 05:09:35.215982 | orchestrator | Wednesday 18 March 2026 05:09:31 +0000 (0:00:00.150) 0:26:02.929 *******
2026-03-18 05:09:35.215991 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:35.216001 | orchestrator |
2026-03-18 05:09:35.216010 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-18 05:09:35.216020 | orchestrator | Wednesday 18 March 2026 05:09:31 +0000 (0:00:00.142) 0:26:03.072 *******
2026-03-18 05:09:35.216029 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:35.216039 | orchestrator |
2026-03-18 05:09:35.216048 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-18 05:09:35.216058 | orchestrator | Wednesday 18 March 2026 05:09:31 +0000 (0:00:00.136) 0:26:03.208 *******
2026-03-18 05:09:35.216074 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:35.216083 | orchestrator |
2026-03-18 05:09:35.216093 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-18 05:09:35.216103 | orchestrator | Wednesday 18 March 2026 05:09:31 +0000 (0:00:00.138) 0:26:03.347 *******
2026-03-18 05:09:35.216113 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:35.216122 | orchestrator |
2026-03-18 05:09:35.216132 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-18 05:09:35.216141 | orchestrator | Wednesday 18 March 2026 05:09:31 +0000 (0:00:00.141) 0:26:03.488 *******
2026-03-18 05:09:35.216151 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:35.216160 | orchestrator |
2026-03-18 05:09:35.216172 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-18 05:09:35.216188 | orchestrator | Wednesday 18 March 2026 05:09:32 +0000 (0:00:00.139) 0:26:03.627 *******
2026-03-18 05:09:35.216203 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:35.216219 | orchestrator |
2026-03-18 05:09:35.216234 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-18 05:09:35.216249 | orchestrator | Wednesday 18 March 2026 05:09:32 +0000 (0:00:00.130) 0:26:03.758 *******
2026-03-18 05:09:35.216265 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:35.216280 | orchestrator |
2026-03-18 05:09:35.216296 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-18 05:09:35.216311 | orchestrator | Wednesday 18 March 2026 05:09:32 +0000 (0:00:00.140) 0:26:03.899 *******
2026-03-18 05:09:35.216327 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:35.216382 | orchestrator |
2026-03-18 05:09:35.216398 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-18 05:09:35.216409 | orchestrator | Wednesday 18 March 2026 05:09:32 +0000 (0:00:00.219) 0:26:04.119 *******
2026-03-18 05:09:35.216418 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:09:35.216428 | orchestrator |
2026-03-18 05:09:35.216438 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-18 05:09:35.216447 | orchestrator | Wednesday 18 March 2026 05:09:33 +0000 (0:00:00.934) 0:26:05.053 *******
2026-03-18 05:09:35.216456 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:09:35.216466 | orchestrator |
2026-03-18 05:09:35.216476 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-18 05:09:35.216485 | orchestrator | Wednesday 18 March 2026 05:09:34 +0000 (0:00:01.535) 0:26:06.589 *******
2026-03-18 05:09:35.216495 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-03-18 05:09:35.216504 | orchestrator |
2026-03-18 05:09:35.216514 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-18 05:09:35.216533 | orchestrator | Wednesday 18 March 2026 05:09:35 +0000 (0:00:00.224) 0:26:06.813 *******
2026-03-18 05:09:50.735873 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.735984 | orchestrator |
2026-03-18 05:09:50.736001 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-18 05:09:50.736014 | orchestrator | Wednesday 18 March 2026 05:09:35 +0000 (0:00:00.205) 0:26:07.019 *******
2026-03-18 05:09:50.736025 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.736036 | orchestrator |
2026-03-18 05:09:50.736047 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-18 05:09:50.736058 | orchestrator | Wednesday 18 March 2026 05:09:35 +0000 (0:00:00.147) 0:26:07.166 *******
2026-03-18 05:09:50.736069 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-18 05:09:50.736080 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-18 05:09:50.736092 | orchestrator |
2026-03-18 05:09:50.736103 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-18 05:09:50.736114 | orchestrator | Wednesday 18 March 2026 05:09:36 +0000 (0:00:00.808) 0:26:07.974 *******
2026-03-18 05:09:50.736125 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:09:50.736161 | orchestrator |
2026-03-18 05:09:50.736173 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-18 05:09:50.736184 | orchestrator | Wednesday 18 March 2026 05:09:36 +0000 (0:00:00.494) 0:26:08.469 *******
2026-03-18 05:09:50.736195 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.736206 | orchestrator |
2026-03-18 05:09:50.736217 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-18 05:09:50.736241 | orchestrator | Wednesday 18 March 2026 05:09:37 +0000 (0:00:00.158) 0:26:08.627 *******
2026-03-18 05:09:50.736252 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.736263 | orchestrator |
2026-03-18 05:09:50.736274 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-18 05:09:50.736284 | orchestrator | Wednesday 18 March 2026 05:09:37 +0000 (0:00:00.160) 0:26:08.788 *******
2026-03-18 05:09:50.736295 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.736306 | orchestrator |
2026-03-18 05:09:50.736316 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-18 05:09:50.736327 | orchestrator | Wednesday 18 March 2026 05:09:37 +0000 (0:00:00.144) 0:26:08.933 *******
2026-03-18 05:09:50.736338 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-03-18 05:09:50.736420 | orchestrator |
2026-03-18 05:09:50.736436 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-18 05:09:50.736448 | orchestrator | Wednesday 18 March 2026 05:09:37 +0000 (0:00:00.224) 0:26:09.158 *******
2026-03-18 05:09:50.736462 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:09:50.736475 | orchestrator |
2026-03-18 05:09:50.736514 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-18 05:09:50.736528 | orchestrator | Wednesday 18 March 2026 05:09:38 +0000 (0:00:00.656) 0:26:09.815 *******
2026-03-18 05:09:50.736546 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-18 05:09:50.736565 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-18 05:09:50.736583 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-18 05:09:50.736601 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.736619 | orchestrator |
2026-03-18 05:09:50.736638 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-18 05:09:50.736656 | orchestrator | Wednesday 18 March 2026 05:09:38 +0000 (0:00:00.461) 0:26:10.276 *******
2026-03-18 05:09:50.736674 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.736693 | orchestrator |
2026-03-18 05:09:50.736712 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-18 05:09:50.736731 | orchestrator | Wednesday 18 March 2026 05:09:38 +0000 (0:00:00.135) 0:26:10.412 *******
2026-03-18 05:09:50.736748 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.736759 | orchestrator |
2026-03-18 05:09:50.736770 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-18 05:09:50.736781 | orchestrator | Wednesday 18 March 2026 05:09:39 +0000 (0:00:00.213) 0:26:10.626 *******
2026-03-18 05:09:50.736791 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.736802 | orchestrator |
2026-03-18 05:09:50.736813 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-18 05:09:50.736824 | orchestrator | Wednesday 18 March 2026 05:09:39 +0000 (0:00:00.150) 0:26:10.776 *******
2026-03-18 05:09:50.736834 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.736845 | orchestrator |
2026-03-18 05:09:50.736856 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-18 05:09:50.736866 | orchestrator | Wednesday 18 March 2026 05:09:39 +0000 (0:00:00.162) 0:26:10.938 *******
2026-03-18 05:09:50.736877 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.736888 | orchestrator |
2026-03-18 05:09:50.736899 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-18 05:09:50.736909 | orchestrator | Wednesday 18 March 2026 05:09:39 +0000 (0:00:00.159) 0:26:11.097 *******
2026-03-18 05:09:50.736932 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:09:50.736944 | orchestrator |
2026-03-18 05:09:50.736955 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-18 05:09:50.736965 | orchestrator | Wednesday 18 March 2026 05:09:40 +0000 (0:00:01.500) 0:26:12.598 *******
2026-03-18 05:09:50.736976 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:09:50.736987 | orchestrator |
2026-03-18 05:09:50.736998 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-18 05:09:50.737008 | orchestrator | Wednesday 18 March 2026 05:09:41 +0000 (0:00:00.146) 0:26:12.744 *******
2026-03-18 05:09:50.737019 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-03-18 05:09:50.737030 | orchestrator |
2026-03-18 05:09:50.737040 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-18 05:09:50.737071 | orchestrator | Wednesday 18 March 2026 05:09:41 +0000 (0:00:00.234) 0:26:12.979 *******
2026-03-18 05:09:50.737083 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.737093 | orchestrator |
2026-03-18 05:09:50.737104 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-18 05:09:50.737115 | orchestrator | Wednesday 18 March 2026 05:09:41 +0000 (0:00:00.160) 0:26:13.139 *******
2026-03-18 05:09:50.737126 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.737142 | orchestrator |
2026-03-18 05:09:50.737160 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-18 05:09:50.737178 | orchestrator | Wednesday 18 March 2026 05:09:41 +0000 (0:00:00.152) 0:26:13.292 *******
2026-03-18 05:09:50.737196 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.737214 | orchestrator |
2026-03-18 05:09:50.737231 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-18 05:09:50.737249 | orchestrator | Wednesday 18 March 2026 05:09:41 +0000 (0:00:00.151) 0:26:13.443 *******
2026-03-18 05:09:50.737269 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.737286 | orchestrator |
2026-03-18 05:09:50.737302 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-18 05:09:50.737313 | orchestrator | Wednesday 18 March 2026 05:09:41 +0000 (0:00:00.144) 0:26:13.588 *******
2026-03-18 05:09:50.737324 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.737334 | orchestrator |
2026-03-18 05:09:50.737368 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-18 05:09:50.737381 | orchestrator | Wednesday 18 March 2026 05:09:42 +0000 (0:00:00.484) 0:26:14.072 *******
2026-03-18 05:09:50.737392 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.737403 | orchestrator |
2026-03-18 05:09:50.737423 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-18 05:09:50.737434 | orchestrator | Wednesday 18 March 2026 05:09:42 +0000 (0:00:00.159) 0:26:14.231 *******
2026-03-18 05:09:50.737445 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.737456 | orchestrator |
2026-03-18 05:09:50.737466 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-18 05:09:50.737477 | orchestrator | Wednesday 18 March 2026 05:09:42 +0000 (0:00:00.171) 0:26:14.403 *******
2026-03-18 05:09:50.737488 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:09:50.737498 | orchestrator |
2026-03-18 05:09:50.737509 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-18 05:09:50.737519 | orchestrator | Wednesday 18 March 2026 05:09:42 +0000 (0:00:00.162) 0:26:14.566 *******
2026-03-18 05:09:50.737530 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:09:50.737541 | orchestrator |
2026-03-18 05:09:50.737551 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-18 05:09:50.737562 | orchestrator | Wednesday 18 March 2026 05:09:43 +0000 (0:00:00.248) 0:26:14.814 *******
2026-03-18 05:09:50.737573 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-03-18 05:09:50.737584 | orchestrator |
2026-03-18 05:09:50.737607 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-18 05:09:50.737618 | orchestrator | Wednesday 18 March 2026 05:09:43 +0000 (0:00:00.209) 0:26:15.024 *******
2026-03-18 05:09:50.737629 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-03-18 05:09:50.737640 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-18 05:09:50.737651 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-18 05:09:50.737662 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-18 05:09:50.737672 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-18 05:09:50.737683 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-18 05:09:50.737694 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-18 05:09:50.737705 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-18 05:09:50.737716 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-18 05:09:50.737727 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-18 05:09:50.737737 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-18 05:09:50.737748 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-18 05:09:50.737758 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-18 05:09:50.737769 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-18 05:09:50.737780 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-03-18 05:09:50.737790 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-03-18 05:09:50.737801 | orchestrator |
2026-03-18 05:09:50.737812 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-18 05:09:50.737822 | orchestrator | Wednesday 18 March 2026 05:09:48 +0000 (0:00:05.578) 0:26:20.602 *******
2026-03-18 05:09:50.737833 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-03-18 05:09:50.737844 | orchestrator |
2026-03-18 05:09:50.737855 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-18 05:09:50.737866 | orchestrator | Wednesday 18 March 2026 05:09:49 +0000 (0:00:00.223) 0:26:20.826 *******
2026-03-18 05:09:50.737877 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-18 05:09:50.737889 | orchestrator |
2026-03-18 05:09:50.737900 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-18 05:09:50.737910 | orchestrator | Wednesday 18 March 2026 05:09:49 +0000 (0:00:00.530) 0:26:21.357 *******
2026-03-18 05:09:50.737921 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-18 05:09:50.737932 | orchestrator |
2026-03-18 05:09:50.737943 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-18 05:09:50.737962 | orchestrator | Wednesday 18 March 2026 05:09:50 +0000 (0:00:00.978) 0:26:22.335 *******
2026-03-18 05:10:10.698065 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698166 | orchestrator |
2026-03-18 05:10:10.698179 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-18 05:10:10.698190 | orchestrator | Wednesday 18 March 2026 05:09:51 +0000 (0:00:00.445) 0:26:22.780 *******
2026-03-18 05:10:10.698198 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698207 | orchestrator |
2026-03-18 05:10:10.698215 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-18 05:10:10.698224 | orchestrator | Wednesday 18 March 2026 05:09:51 +0000 (0:00:00.156) 0:26:22.937 *******
2026-03-18 05:10:10.698230 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698238 | orchestrator |
2026-03-18 05:10:10.698246 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-18 05:10:10.698254 | orchestrator | Wednesday 18 March 2026 05:09:51 +0000 (0:00:00.145) 0:26:23.082 *******
2026-03-18 05:10:10.698282 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698290 | orchestrator |
2026-03-18 05:10:10.698298 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-18 05:10:10.698305 | orchestrator | Wednesday 18 March 2026 05:09:51 +0000 (0:00:00.146) 0:26:23.229 *******
2026-03-18 05:10:10.698312 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698320 | orchestrator |
2026-03-18 05:10:10.698329 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-18 05:10:10.698338 | orchestrator | Wednesday 18 March 2026 05:09:51 +0000 (0:00:00.139) 0:26:23.368 *******
2026-03-18 05:10:10.698358 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698411 | orchestrator |
2026-03-18 05:10:10.698420 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-18 05:10:10.698428 | orchestrator | Wednesday 18 March 2026 05:09:51 +0000 (0:00:00.145) 0:26:23.513 *******
2026-03-18 05:10:10.698436 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698444 | orchestrator |
2026-03-18 05:10:10.698452 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-18 05:10:10.698461 | orchestrator | Wednesday 18 March 2026 05:09:52 +0000 (0:00:00.143) 0:26:23.657 *******
2026-03-18 05:10:10.698468 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698476 | orchestrator |
2026-03-18 05:10:10.698484 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-18 05:10:10.698492 | orchestrator | Wednesday 18 March 2026 05:09:52 +0000 (0:00:00.153) 0:26:23.811 *******
2026-03-18 05:10:10.698500 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698508 | orchestrator |
2026-03-18 05:10:10.698516 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-18 05:10:10.698525 | orchestrator | Wednesday 18 March 2026 05:09:52 +0000 (0:00:00.149) 0:26:23.960 *******
2026-03-18 05:10:10.698532 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698541 | orchestrator |
2026-03-18 05:10:10.698548 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-18 05:10:10.698556 | orchestrator | Wednesday 18 March 2026 05:09:52 +0000 (0:00:00.183) 0:26:24.144 *******
2026-03-18 05:10:10.698564 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698573 | orchestrator |
2026-03-18 05:10:10.698581 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-18 05:10:10.698590 | orchestrator | Wednesday 18 March 2026 05:09:52 +0000 (0:00:00.153) 0:26:24.298 *******
2026-03-18 05:10:10.698598 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-18 05:10:10.698606 | orchestrator |
2026-03-18 05:10:10.698614 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-18 05:10:10.698622 | orchestrator | Wednesday 18 March 2026 05:09:56 +0000 (0:00:03.465) 0:26:27.763 *******
2026-03-18 05:10:10.698630 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-18 05:10:10.698640 | orchestrator |
2026-03-18 05:10:10.698649 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-18 05:10:10.698658 | orchestrator | Wednesday 18 March 2026 05:09:56 +0000 (0:00:00.189) 0:26:27.952 *******
2026-03-18 05:10:10.698669 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-18 05:10:10.698681 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-18 05:10:10.698697 | orchestrator |
2026-03-18 05:10:10.698706 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-18 05:10:10.698714 | orchestrator | Wednesday 18 March 2026 05:10:00 +0000 (0:00:04.487) 0:26:32.440 *******
2026-03-18 05:10:10.698722 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698730 | orchestrator |
2026-03-18 05:10:10.698738 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-18 05:10:10.698746 | orchestrator | Wednesday 18 March 2026 05:10:00 +0000 (0:00:00.151) 0:26:32.591 *******
2026-03-18 05:10:10.698755 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698763 | orchestrator |
2026-03-18 05:10:10.698771 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-18 05:10:10.698795 | orchestrator | Wednesday 18 March 2026 05:10:01 +0000 (0:00:00.131) 0:26:32.722 *******
2026-03-18 05:10:10.698804 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698812 | orchestrator |
2026-03-18 05:10:10.698821 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-18 05:10:10.698829 | orchestrator | Wednesday 18 March 2026 05:10:01 +0000 (0:00:00.165) 0:26:32.887 *******
2026-03-18 05:10:10.698837 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698846 | orchestrator |
2026-03-18 05:10:10.698854 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-18 05:10:10.698863 | orchestrator | Wednesday 18 March 2026 05:10:01 +0000 (0:00:00.174) 0:26:33.062 *******
2026-03-18 05:10:10.698870 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698879 | orchestrator |
2026-03-18 05:10:10.698886 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-18 05:10:10.698895 | orchestrator | Wednesday 18 March 2026 05:10:01 +0000 (0:00:00.171) 0:26:33.233 *******
2026-03-18 05:10:10.698903 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:10:10.698911 | orchestrator |
2026-03-18 05:10:10.698919 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-18 05:10:10.698928 | orchestrator | Wednesday 18 March 2026 05:10:01 +0000 (0:00:00.255) 0:26:33.489 *******
2026-03-18 05:10:10.698936 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-18 05:10:10.698944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-18 05:10:10.698952 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-18 05:10:10.698964 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.698973 | orchestrator |
2026-03-18 05:10:10.698980 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-18 05:10:10.698988 | orchestrator | Wednesday 18 March 2026 05:10:02 +0000 (0:00:00.459) 0:26:33.948 *******
2026-03-18 05:10:10.698995 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-18 05:10:10.699003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-18 05:10:10.699010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-18 05:10:10.699019 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.699027 | orchestrator |
2026-03-18 05:10:10.699035 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-18 05:10:10.699043 | orchestrator | Wednesday 18 March 2026 05:10:02 +0000 (0:00:00.503) 0:26:34.452 *******
2026-03-18 05:10:10.699051 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-18 05:10:10.699058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-18 05:10:10.699066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-18 05:10:10.699074 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.699082 | orchestrator |
2026-03-18 05:10:10.699090 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-18 05:10:10.699098 | orchestrator | Wednesday 18 March 2026 05:10:03 +0000 (0:00:00.447) 0:26:34.899 *******
2026-03-18 05:10:10.699107 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:10:10.699120 | orchestrator |
2026-03-18 05:10:10.699128 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-18 05:10:10.699135 | orchestrator | Wednesday 18 March 2026 05:10:03 +0000 (0:00:00.182) 0:26:35.081 *******
2026-03-18 05:10:10.699143 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-18 05:10:10.699151 | orchestrator |
2026-03-18 05:10:10.699159 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-18 05:10:10.699167 | orchestrator | Wednesday 18 March 2026 05:10:03 +0000 (0:00:00.481) 0:26:35.563 *******
2026-03-18 05:10:10.699175 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:10:10.699183 | orchestrator |
2026-03-18 05:10:10.699191 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-03-18 05:10:10.699200 | orchestrator | Wednesday 18 March 2026 05:10:05 +0000 (0:00:01.648) 0:26:37.212 *******
2026-03-18 05:10:10.699207 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3
2026-03-18 05:10:10.699215 | orchestrator |
2026-03-18 05:10:10.699223 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-18 05:10:10.699231 | orchestrator | Wednesday 18 March 2026 05:10:06 +0000 (0:00:00.561) 0:26:37.774 *******
2026-03-18 05:10:10.699238 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 05:10:10.699247 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-18 05:10:10.699254 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-18 05:10:10.699262 | orchestrator |
2026-03-18 05:10:10.699271 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-18 05:10:10.699279 | orchestrator | Wednesday 18 March 2026 05:10:08 +0000 (0:00:02.287) 0:26:40.062 *******
2026-03-18 05:10:10.699287 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-03-18 05:10:10.699295 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-18 05:10:10.699302 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:10:10.699310 | orchestrator |
2026-03-18 05:10:10.699318 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-03-18 05:10:10.699327 | orchestrator | Wednesday 18 March 2026 05:10:09 +0000 (0:00:00.929) 0:26:40.992 *******
2026-03-18 05:10:10.699335 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:10:10.699343 | orchestrator |
2026-03-18 05:10:10.699351 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-03-18 05:10:10.699359 | orchestrator | Wednesday 18 March 2026 05:10:09 +0000 (0:00:00.133) 0:26:41.125 *******
2026-03-18 05:10:10.699385 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3
2026-03-18 05:10:10.699394 | orchestrator |
2026-03-18 05:10:10.699401 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-03-18 05:10:10.699409 | orchestrator | Wednesday 18 March 2026 05:10:10 +0000 (0:00:00.575) 0:26:41.700 *******
2026-03-18 05:10:10.699422 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-18 05:11:03.321966 | orchestrator |
2026-03-18 05:11:03.322145 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-03-18 05:11:03.322164 | orchestrator | Wednesday 18 March 2026 05:10:10 +0000 (0:00:00.603) 0:26:42.303 *******
2026-03-18 05:11:03.322176 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 05:11:03.322189 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-18 05:11:03.322201 | orchestrator |
2026-03-18 05:11:03.322212 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-18 05:11:03.322224 | orchestrator | Wednesday 18 March 2026 05:10:14 +0000 (0:00:04.283) 0:26:46.587 *******
2026-03-18 05:11:03.322235 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-18 05:11:03.322247 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-18 05:11:03.322283 | orchestrator |
2026-03-18 05:11:03.322294 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-18 05:11:03.322305 | orchestrator | Wednesday 18 March 2026 05:10:17 +0000 (0:00:02.208) 0:26:48.796 *******
2026-03-18 05:11:03.322315 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-03-18 05:11:03.322327 | orchestrator | ok: [testbed-node-3]
2026-03-18 05:11:03.322339 | orchestrator |
2026-03-18 05:11:03.322364 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-03-18 05:11:03.322376 | orchestrator | Wednesday 18 March 2026 05:10:18 +0000 (0:00:01.011) 0:26:49.807 *******
2026-03-18 05:11:03.322386 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-03-18 05:11:03.322397 | orchestrator |
2026-03-18 05:11:03.322437 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-03-18 05:11:03.322448 | orchestrator | Wednesday 18 March 2026 05:10:19 +0000 (0:00:00.985) 0:26:50.793 *******
2026-03-18 05:11:03.322459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 05:11:03.322470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 05:11:03.322481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 05:11:03.322492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 05:11:03.322502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 05:11:03.322513 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:11:03.322524 | orchestrator |
2026-03-18 05:11:03.322535 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-03-18 05:11:03.322545 | orchestrator | Wednesday 18 March 2026 05:10:19 +0000 (0:00:00.646) 0:26:51.440 *******
2026-03-18 05:11:03.322557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 05:11:03.322568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 05:11:03.322579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 05:11:03.322590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 05:11:03.322601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 05:11:03.322612 | orchestrator | skipping: [testbed-node-3]
2026-03-18 05:11:03.322622 | orchestrator |
2026-03-18 05:11:03.322633 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-03-18 05:11:03.322644 | orchestrator | Wednesday 18 March 2026 05:10:20 +0000 (0:00:00.643) 0:26:52.083 *******
2026-03-18 05:11:03.322655 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-18 05:11:03.322667
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-18 05:11:03.322677 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-18 05:11:03.322688 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-18 05:11:03.322711 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-18 05:11:03.322722 | orchestrator | 2026-03-18 05:11:03.322733 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-18 05:11:03.322761 | orchestrator | Wednesday 18 March 2026 05:10:51 +0000 (0:00:31.138) 0:27:23.221 ******* 2026-03-18 05:11:03.322773 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:11:03.322783 | orchestrator | 2026-03-18 05:11:03.322794 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-18 05:11:03.322805 | orchestrator | Wednesday 18 March 2026 05:10:51 +0000 (0:00:00.129) 0:27:23.351 ******* 2026-03-18 05:11:03.322815 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:11:03.322826 | orchestrator | 2026-03-18 05:11:03.322837 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-18 05:11:03.322847 | orchestrator | Wednesday 18 March 2026 05:10:51 +0000 (0:00:00.155) 0:27:23.506 ******* 2026-03-18 05:11:03.322858 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-03-18 05:11:03.322868 | orchestrator | 2026-03-18 05:11:03.322879 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-18 05:11:03.322889 | orchestrator | Wednesday 18 March 2026 05:10:52 +0000 (0:00:00.599) 0:27:24.106 ******* 2026-03-18 05:11:03.322900 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-03-18 05:11:03.322910 | orchestrator | 2026-03-18 05:11:03.322921 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-18 05:11:03.322932 | orchestrator | Wednesday 18 March 2026 05:10:53 +0000 (0:00:00.622) 0:27:24.728 ******* 2026-03-18 05:11:03.322943 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:11:03.322953 | orchestrator | 2026-03-18 05:11:03.322969 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-18 05:11:03.322980 | orchestrator | Wednesday 18 March 2026 05:10:54 +0000 (0:00:01.029) 0:27:25.758 ******* 2026-03-18 05:11:03.322991 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:11:03.323001 | orchestrator | 2026-03-18 05:11:03.323012 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-18 05:11:03.323023 | orchestrator | Wednesday 18 March 2026 05:10:55 +0000 (0:00:00.982) 0:27:26.740 ******* 2026-03-18 05:11:03.323033 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:11:03.323044 | orchestrator | 2026-03-18 05:11:03.323054 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-18 05:11:03.323065 | orchestrator | Wednesday 18 March 2026 05:10:57 +0000 (0:00:02.561) 0:27:29.302 ******* 2026-03-18 05:11:03.323075 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-18 05:11:03.323086 | orchestrator | 2026-03-18 05:11:03.323097 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-18 05:11:03.323107 | 
orchestrator | 2026-03-18 05:11:03.323118 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 05:11:03.323128 | orchestrator | Wednesday 18 March 2026 05:11:00 +0000 (0:00:02.400) 0:27:31.702 ******* 2026-03-18 05:11:03.323139 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-03-18 05:11:03.323150 | orchestrator | 2026-03-18 05:11:03.323161 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-18 05:11:03.323171 | orchestrator | Wednesday 18 March 2026 05:11:00 +0000 (0:00:00.253) 0:27:31.956 ******* 2026-03-18 05:11:03.323182 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:03.323193 | orchestrator | 2026-03-18 05:11:03.323203 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-18 05:11:03.323214 | orchestrator | Wednesday 18 March 2026 05:11:00 +0000 (0:00:00.504) 0:27:32.461 ******* 2026-03-18 05:11:03.323224 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:03.323235 | orchestrator | 2026-03-18 05:11:03.323246 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 05:11:03.323262 | orchestrator | Wednesday 18 March 2026 05:11:00 +0000 (0:00:00.142) 0:27:32.603 ******* 2026-03-18 05:11:03.323273 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:03.323284 | orchestrator | 2026-03-18 05:11:03.323294 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 05:11:03.323305 | orchestrator | Wednesday 18 March 2026 05:11:01 +0000 (0:00:00.496) 0:27:33.100 ******* 2026-03-18 05:11:03.323315 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:03.323326 | orchestrator | 2026-03-18 05:11:03.323336 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-18 05:11:03.323347 | orchestrator | Wednesday 
18 March 2026 05:11:01 +0000 (0:00:00.151) 0:27:33.251 ******* 2026-03-18 05:11:03.323357 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:03.323368 | orchestrator | 2026-03-18 05:11:03.323378 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-18 05:11:03.323389 | orchestrator | Wednesday 18 March 2026 05:11:01 +0000 (0:00:00.164) 0:27:33.416 ******* 2026-03-18 05:11:03.323414 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:03.323426 | orchestrator | 2026-03-18 05:11:03.323437 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-18 05:11:03.323448 | orchestrator | Wednesday 18 March 2026 05:11:01 +0000 (0:00:00.162) 0:27:33.578 ******* 2026-03-18 05:11:03.323532 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:03.323545 | orchestrator | 2026-03-18 05:11:03.323556 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-18 05:11:03.323567 | orchestrator | Wednesday 18 March 2026 05:11:02 +0000 (0:00:00.154) 0:27:33.733 ******* 2026-03-18 05:11:03.323577 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:03.323588 | orchestrator | 2026-03-18 05:11:03.323599 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-18 05:11:03.323611 | orchestrator | Wednesday 18 March 2026 05:11:02 +0000 (0:00:00.463) 0:27:34.196 ******* 2026-03-18 05:11:03.323621 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 05:11:03.323632 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:11:03.323643 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:11:03.323654 | orchestrator | 2026-03-18 05:11:03.323665 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-03-18 05:11:03.323684 | orchestrator | Wednesday 18 March 2026 05:11:03 +0000 (0:00:00.724) 0:27:34.920 ******* 2026-03-18 05:11:10.807389 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:10.807524 | orchestrator | 2026-03-18 05:11:10.807534 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-18 05:11:10.807541 | orchestrator | Wednesday 18 March 2026 05:11:03 +0000 (0:00:00.274) 0:27:35.195 ******* 2026-03-18 05:11:10.807547 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 05:11:10.807553 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:11:10.807559 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:11:10.807564 | orchestrator | 2026-03-18 05:11:10.807570 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-18 05:11:10.807575 | orchestrator | Wednesday 18 March 2026 05:11:05 +0000 (0:00:01.889) 0:27:37.084 ******* 2026-03-18 05:11:10.807581 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-18 05:11:10.807587 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-18 05:11:10.807592 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-18 05:11:10.807598 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:10.807603 | orchestrator | 2026-03-18 05:11:10.807608 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-18 05:11:10.807641 | orchestrator | Wednesday 18 March 2026 05:11:05 +0000 (0:00:00.423) 0:27:37.508 ******* 2026-03-18 05:11:10.807648 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-18 05:11:10.807656 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-18 05:11:10.807661 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-18 05:11:10.807666 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:10.807671 | orchestrator | 2026-03-18 05:11:10.807677 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-18 05:11:10.807682 | orchestrator | Wednesday 18 March 2026 05:11:06 +0000 (0:00:00.657) 0:27:38.165 ******* 2026-03-18 05:11:10.807688 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:11:10.807696 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:11:10.807702 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:11:10.807707 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:10.807712 | orchestrator | 2026-03-18 05:11:10.807717 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-18 05:11:10.807722 | orchestrator | Wednesday 18 March 2026 05:11:06 +0000 (0:00:00.186) 0:27:38.352 ******* 2026-03-18 05:11:10.807741 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'f231ed715636', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 05:11:04.119981', 'end': '2026-03-18 05:11:04.168615', 'delta': '0:00:00.048634', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f231ed715636'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-18 05:11:10.807749 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'c6b616adb9bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 05:11:04.668315', 'end': '2026-03-18 05:11:04.718412', 'delta': '0:00:00.050097', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c6b616adb9bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-18 05:11:10.807772 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '38d5679b5612', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 05:11:05.256873', 'end': '2026-03-18 05:11:05.309451', 'delta': '0:00:00.052578', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['38d5679b5612'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-18 05:11:10.807778 | orchestrator | 2026-03-18 05:11:10.807783 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-18 05:11:10.807788 | orchestrator | Wednesday 18 March 2026 05:11:06 +0000 (0:00:00.196) 0:27:38.549 ******* 2026-03-18 05:11:10.807793 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:10.807798 | orchestrator | 2026-03-18 05:11:10.807803 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-18 05:11:10.807808 | orchestrator | Wednesday 18 March 2026 05:11:07 +0000 (0:00:00.278) 0:27:38.827 ******* 2026-03-18 05:11:10.807813 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:10.807818 | orchestrator | 2026-03-18 05:11:10.807824 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-03-18 05:11:10.807829 | orchestrator | Wednesday 18 March 2026 05:11:07 +0000 (0:00:00.348) 0:27:39.176 ******* 2026-03-18 05:11:10.807834 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:10.807839 | orchestrator | 2026-03-18 05:11:10.807844 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-18 05:11:10.807849 | orchestrator | Wednesday 18 March 2026 05:11:07 +0000 (0:00:00.162) 0:27:39.338 ******* 2026-03-18 05:11:10.807853 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-18 05:11:10.807859 | orchestrator | 2026-03-18 05:11:10.807864 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 05:11:10.807869 | orchestrator | Wednesday 18 March 2026 05:11:08 +0000 (0:00:01.002) 0:27:40.341 ******* 2026-03-18 05:11:10.807874 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:10.807879 | orchestrator | 2026-03-18 05:11:10.807884 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-18 05:11:10.807889 | orchestrator | Wednesday 18 March 2026 05:11:08 +0000 (0:00:00.175) 0:27:40.517 ******* 2026-03-18 05:11:10.807893 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:10.807899 | orchestrator | 2026-03-18 05:11:10.807903 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-18 05:11:10.807908 | orchestrator | Wednesday 18 March 2026 05:11:09 +0000 (0:00:00.133) 0:27:40.651 ******* 2026-03-18 05:11:10.807913 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:10.807918 | orchestrator | 2026-03-18 05:11:10.807923 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 05:11:10.807928 | orchestrator | Wednesday 18 March 2026 05:11:09 +0000 (0:00:00.954) 0:27:41.605 ******* 2026-03-18 05:11:10.807933 | orchestrator | 
skipping: [testbed-node-4] 2026-03-18 05:11:10.807938 | orchestrator | 2026-03-18 05:11:10.807943 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-18 05:11:10.807949 | orchestrator | Wednesday 18 March 2026 05:11:10 +0000 (0:00:00.136) 0:27:41.742 ******* 2026-03-18 05:11:10.807955 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:10.807961 | orchestrator | 2026-03-18 05:11:10.807967 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-18 05:11:10.807976 | orchestrator | Wednesday 18 March 2026 05:11:10 +0000 (0:00:00.153) 0:27:41.895 ******* 2026-03-18 05:11:10.807982 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:10.807988 | orchestrator | 2026-03-18 05:11:10.807994 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-18 05:11:10.808000 | orchestrator | Wednesday 18 March 2026 05:11:10 +0000 (0:00:00.197) 0:27:42.093 ******* 2026-03-18 05:11:10.808006 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:10.808012 | orchestrator | 2026-03-18 05:11:10.808018 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-18 05:11:10.808023 | orchestrator | Wednesday 18 March 2026 05:11:10 +0000 (0:00:00.139) 0:27:42.232 ******* 2026-03-18 05:11:10.808029 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:10.808035 | orchestrator | 2026-03-18 05:11:10.808040 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-18 05:11:10.808050 | orchestrator | Wednesday 18 March 2026 05:11:10 +0000 (0:00:00.184) 0:27:42.416 ******* 2026-03-18 05:11:11.386073 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:11.386176 | orchestrator | 2026-03-18 05:11:11.386193 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-18 05:11:11.386205 
| orchestrator | Wednesday 18 March 2026 05:11:10 +0000 (0:00:00.153) 0:27:42.570 ******* 2026-03-18 05:11:11.386218 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:11.386230 | orchestrator | 2026-03-18 05:11:11.386241 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-18 05:11:11.386252 | orchestrator | Wednesday 18 March 2026 05:11:11 +0000 (0:00:00.206) 0:27:42.776 ******* 2026-03-18 05:11:11.386266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:11:11.386299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d', 'dm-uuid-LVM-1nghto8FjlgOMGE0qJuNE35bcFGeakm7FeqYn9N8yM2I7mHfmTh3UyYEE55mFAWL'], 'uuids': ['983d6df2-25ad-44ac-a3c4-ba9acd83e203'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4bc8da1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL']}})  2026-03-18 05:11:11.386315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a', 'scsi-SQEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9cbe8edb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-18 05:11:11.386328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jnV2yd-YS7R-Vqep-tcrP-VJxp-okiM-Yb1ELG', 'scsi-0QEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc', 'scsi-SQEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '80734d97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af']}})  2026-03-18 05:11:11.386363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:11:11.386375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:11:11.386451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 05:11:11.386466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:11:11.386484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M', 'dm-uuid-CRYPT-LUKS2-61b3b30ad50c493e85c9b4a1f26e6c13-31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-18 05:11:11.386496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:11:11.386508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af', 'dm-uuid-LVM-r2QSpox5L5YvZxbLW2ofZmnL2yRyHAcb31gjpKAQuj1V0dzEH4DggGep9onP7U5M'], 'uuids': ['61b3b30a-d50c-493e-85c9-b4a1f26e6c13'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '80734d97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M']}})
2026-03-18 05:11:11.386528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-gdKyfy-wnzk-0StP-QaSt-irpk-iROA-l0CD4I', 'scsi-0QEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a', 'scsi-SQEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4bc8da1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d']}})
2026-03-18 05:11:11.386539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:11:11.386572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '248efa21', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-18 05:11:11.722780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:11:11.722905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-18 05:11:11.722964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL', 'dm-uuid-CRYPT-LUKS2-983d6df225ad44aca3c4ba9acd83e203-FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-18 05:11:11.722990 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:11.723010 | orchestrator |
2026-03-18 05:11:11.723031 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-18 05:11:11.723051 | orchestrator | Wednesday 18 March 2026 05:11:11 +0000 (0:00:00.343) 0:27:43.120 *******
2026-03-18 05:11:11.723073 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:11.723191 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d', 'dm-uuid-LVM-1nghto8FjlgOMGE0qJuNE35bcFGeakm7FeqYn9N8yM2I7mHfmTh3UyYEE55mFAWL'], 'uuids': ['983d6df2-25ad-44ac-a3c4-ba9acd83e203'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4bc8da1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL']}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:11.723237 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a', 'scsi-SQEMU_QEMU_HARDDISK_9cbe8edb-19a8-4e8f-bbd2-89a6e80c8d6a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9cbe8edb', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:11.723291 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jnV2yd-YS7R-Vqep-tcrP-VJxp-okiM-Yb1ELG', 'scsi-0QEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc', 'scsi-SQEMU_QEMU_HARDDISK_80734d97-478b-4a5e-879f-889cd258efbc'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '80734d97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af']}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:11.723334 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:11.723359 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:11.723380 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:11.723405 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:11.723452 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M', 'dm-uuid-CRYPT-LUKS2-61b3b30ad50c493e85c9b4a1f26e6c13-31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:13.071397 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:13.071674 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d0e002fd--9a73--564c--a03c--ee3a79d477af-osd--block--d0e002fd--9a73--564c--a03c--ee3a79d477af', 'dm-uuid-LVM-r2QSpox5L5YvZxbLW2ofZmnL2yRyHAcb31gjpKAQuj1V0dzEH4DggGep9onP7U5M'], 'uuids': ['61b3b30a-d50c-493e-85c9-b4a1f26e6c13'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '80734d97', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['31gjpK-AQuj-1V0d-zEH4-DggG-ep9o-nP7U5M']}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:13.071693 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-gdKyfy-wnzk-0StP-QaSt-irpk-iROA-l0CD4I', 'scsi-0QEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a', 'scsi-SQEMU_QEMU_HARDDISK_f4bc8da1-65d0-4f2a-8066-3fa706e86a6a'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4bc8da1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--ab16e1e8--130f--595d--96ba--aeefaeb1133d-osd--block--ab16e1e8--130f--595d--96ba--aeefaeb1133d']}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:13.071723 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:13.071758 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '248efa21', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_248efa21-e866-4de7-b593-1e4360051f6d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:13.071783 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:13.071795 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:13.071812 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL', 'dm-uuid-CRYPT-LUKS2-983d6df225ad44aca3c4ba9acd83e203-FeqYn9-N8yM-2I7m-HfmT-h3Uy-YEE5-5mFAWL'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1',
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-18 05:11:13.071826 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:13.071840 | orchestrator |
2026-03-18 05:11:13.071852 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-18 05:11:13.071878 | orchestrator | Wednesday 18 March 2026 05:11:11 +0000 (0:00:00.518) 0:27:43.517 *******
2026-03-18 05:11:13.071890 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:11:13.071901 | orchestrator |
2026-03-18 05:11:13.071913 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-18 05:11:13.071923 | orchestrator | Wednesday 18 March 2026 05:11:12 +0000 (0:00:00.157) 0:27:44.035 *******
2026-03-18 05:11:13.071942 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:11:13.071952 | orchestrator |
2026-03-18 05:11:13.071963 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-18 05:11:13.071974 | orchestrator | Wednesday 18 March 2026 05:11:12 +0000 (0:00:00.485) 0:27:44.193 *******
2026-03-18 05:11:13.071985 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:11:13.071996 | orchestrator |
2026-03-18 05:11:13.072007 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-18 05:11:13.072025 | orchestrator | Wednesday 18 March 2026 05:11:13 +0000 (0:00:00.467) 0:27:44.679 *******
2026-03-18 05:11:29.160380 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:29.160574 | orchestrator |
2026-03-18 05:11:29.160593 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-18 05:11:29.160605 | orchestrator | Wednesday 18 March 2026 05:11:13 +0000 (0:00:00.277) 0:27:45.147 *******
2026-03-18 05:11:29.160617 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:29.160628 | orchestrator |
2026-03-18 05:11:29.160640 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-18 05:11:29.160651 | orchestrator | Wednesday 18 March 2026 05:11:13 +0000 (0:00:00.166) 0:27:45.425 *******
2026-03-18 05:11:29.160662 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:29.160673 | orchestrator |
2026-03-18 05:11:29.160684 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-18 05:11:29.160695 | orchestrator | Wednesday 18 March 2026 05:11:13 +0000 (0:00:00.166) 0:27:45.591 *******
2026-03-18 05:11:29.160707 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-18 05:11:29.160719 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-18 05:11:29.160730 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-18 05:11:29.160741 | orchestrator |
2026-03-18 05:11:29.160752 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-18 05:11:29.160762 | orchestrator | Wednesday 18 March 2026 05:11:14 +0000 (0:00:00.732) 0:27:46.324 *******
2026-03-18 05:11:29.160773 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-18 05:11:29.160784 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-18 05:11:29.160795 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-18 05:11:29.160812 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:29.160831 | orchestrator |
2026-03-18 05:11:29.160850 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-18 05:11:29.160870 | orchestrator | Wednesday 18 March 2026 05:11:14 +0000 (0:00:00.158) 0:27:46.482 *******
2026-03-18 05:11:29.160889 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-03-18 05:11:29.160908 | orchestrator |
2026-03-18 05:11:29.160928 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-18 05:11:29.160950 | orchestrator | Wednesday 18 March 2026 05:11:15 +0000 (0:00:00.255) 0:27:46.738 *******
2026-03-18 05:11:29.160971 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:29.160990 | orchestrator |
2026-03-18 05:11:29.161007 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-18 05:11:29.161027 | orchestrator | Wednesday 18 March 2026 05:11:15 +0000 (0:00:00.169) 0:27:46.907 *******
2026-03-18 05:11:29.161047 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:29.161064 | orchestrator |
2026-03-18 05:11:29.161081 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-18 05:11:29.161101 | orchestrator | Wednesday 18 March 2026 05:11:15 +0000 (0:00:00.149) 0:27:47.056 *******
2026-03-18 05:11:29.161119 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:29.161137 | orchestrator |
2026-03-18 05:11:29.161156 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-18 05:11:29.161176 | orchestrator | Wednesday 18 March 2026 05:11:15 +0000 (0:00:00.148) 0:27:47.205 *******
2026-03-18 05:11:29.161228 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:11:29.161240 | orchestrator |
2026-03-18 05:11:29.161251 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-18 05:11:29.161262 | orchestrator | Wednesday 18 March 2026 05:11:15 +0000 (0:00:00.245) 0:27:47.451 *******
2026-03-18 05:11:29.161273 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-18 05:11:29.161283 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-18 05:11:29.161294 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-18 05:11:29.161304 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:29.161315 | orchestrator |
2026-03-18 05:11:29.161326 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-18 05:11:29.161336 | orchestrator | Wednesday 18 March 2026 05:11:16 +0000 (0:00:00.803) 0:27:48.254 *******
2026-03-18 05:11:29.161347 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-18 05:11:29.161358 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-18 05:11:29.161368 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-18 05:11:29.161379 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:29.161389 | orchestrator |
2026-03-18 05:11:29.161416 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-18 05:11:29.161459 | orchestrator | Wednesday 18 March 2026 05:11:17 +0000 (0:00:00.791) 0:27:49.045 *******
2026-03-18 05:11:29.161470 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-18 05:11:29.161481 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-18 05:11:29.161491 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-18 05:11:29.161502 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:29.161513 | orchestrator |
2026-03-18 05:11:29.161524 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-18 05:11:29.161535 | orchestrator | Wednesday 18 March 2026 05:11:18 +0000 (0:00:01.116) 0:27:50.162 *******
2026-03-18 05:11:29.161546 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:11:29.161557 | orchestrator |
2026-03-18 05:11:29.161567 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-18 05:11:29.161578 | orchestrator | Wednesday 18 March 2026 05:11:18 +0000 (0:00:00.196) 0:27:50.359 *******
2026-03-18 05:11:29.161589 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-18 05:11:29.161603 | orchestrator |
2026-03-18 05:11:29.161623 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-18 05:11:29.161642 | orchestrator | Wednesday 18 March 2026 05:11:19 +0000 (0:00:00.430) 0:27:50.790 *******
2026-03-18 05:11:29.161684 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 05:11:29.161705 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 05:11:29.161724 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 05:11:29.161740 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-18 05:11:29.161759 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-18 05:11:29.161779 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-18 05:11:29.161798 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 05:11:29.161817 | orchestrator |
2026-03-18 05:11:29.161835 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-18 05:11:29.161855 | orchestrator | Wednesday 18 March 2026 05:11:20 +0000 (0:00:00.892) 0:27:51.682 *******
2026-03-18 05:11:29.161870 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-18 05:11:29.161889 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-18 05:11:29.161907 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-18 05:11:29.161942 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-18 05:11:29.161961 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-18 05:11:29.161977 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-18 05:11:29.161994 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-18 05:11:29.162086 | orchestrator |
2026-03-18 05:11:29.162103 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-03-18 05:11:29.162114 | orchestrator | Wednesday 18 March 2026 05:11:21 +0000 (0:00:01.768) 0:27:53.451 *******
2026-03-18 05:11:29.162130 | orchestrator | changed: [testbed-node-4]
2026-03-18 05:11:29.162149 | orchestrator |
2026-03-18 05:11:29.162167 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-03-18 05:11:29.162185 | orchestrator | Wednesday 18 March 2026 05:11:23 +0000 (0:00:01.230) 0:27:54.682 *******
2026-03-18 05:11:29.162203 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-18 05:11:29.162221 | orchestrator |
2026-03-18 05:11:29.162240 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-03-18 05:11:29.162259 | orchestrator | Wednesday 18 March 2026 05:11:24 +0000 (0:00:01.850) 0:27:56.532 *******
2026-03-18 05:11:29.162278 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-18 05:11:29.162297 | orchestrator |
2026-03-18 05:11:29.162314 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-18 05:11:29.162331 | orchestrator | Wednesday 18 March 2026 05:11:26 +0000 (0:00:01.303) 0:27:57.835 *******
2026-03-18 05:11:29.162343 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-03-18 05:11:29.162354 | orchestrator |
2026-03-18 05:11:29.162365 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-18 05:11:29.162376 | orchestrator | Wednesday 18 March 2026 05:11:26 +0000 (0:00:00.227) 0:27:58.063 *******
2026-03-18 05:11:29.162386 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-03-18 05:11:29.162397 | orchestrator |
2026-03-18 05:11:29.162408 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-18 05:11:29.162418 | orchestrator | Wednesday 18 March 2026 05:11:26 +0000 (0:00:00.218) 0:27:58.281 *******
2026-03-18 05:11:29.162460 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:29.162471 | orchestrator |
2026-03-18 05:11:29.162482 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-18 05:11:29.162492 | orchestrator | Wednesday 18 March 2026 05:11:27 +0000 (0:00:00.452) 0:27:58.734 *******
2026-03-18 05:11:29.162503 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:11:29.162514 | orchestrator |
2026-03-18 05:11:29.162525 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-18 05:11:29.162544 | orchestrator | Wednesday 18 March 2026 05:11:27 +0000 (0:00:00.505) 0:27:59.240 *******
2026-03-18 05:11:29.162555 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:11:29.162566 | orchestrator |
2026-03-18 05:11:29.162576 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-18 05:11:29.162587 | orchestrator | Wednesday 18 March 2026 05:11:28 +0000 (0:00:00.529) 0:27:59.769 *******
2026-03-18 05:11:29.162598 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:11:29.162609 | orchestrator |
2026-03-18 05:11:29.162620 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-18 05:11:29.162630 | orchestrator | Wednesday 18 March 2026 05:11:28 +0000 (0:00:00.541) 0:28:00.311 *******
2026-03-18 05:11:29.162641 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:29.162652 | orchestrator |
2026-03-18 05:11:29.162663 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-18 05:11:29.162683 | orchestrator | Wednesday 18 March 2026 05:11:28 +0000 (0:00:00.138) 0:28:00.450 *******
2026-03-18 05:11:29.162693 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:29.162704 | orchestrator |
2026-03-18 05:11:29.162715 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-18 05:11:29.162725 | orchestrator | Wednesday 18 March 2026 05:11:28 +0000 (0:00:00.159) 0:28:00.610 *******
2026-03-18 05:11:29.162736 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:29.162747 | orchestrator |
2026-03-18 05:11:29.162758 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-18 05:11:29.162781 | orchestrator | Wednesday 18 March 2026 05:11:29 +0000 (0:00:00.152) 0:28:00.762 *******
2026-03-18 05:11:41.607346 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:11:41.607528 | orchestrator |
2026-03-18 05:11:41.607546 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-18 05:11:41.607559 | orchestrator | Wednesday 18 March 2026 05:11:29 +0000 (0:00:00.541) 0:28:01.303 *******
2026-03-18 05:11:41.607570 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:11:41.607581 | orchestrator |
2026-03-18 05:11:41.607593 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-18 05:11:41.607604 | orchestrator | Wednesday 18 March 2026 05:11:30 +0000 (0:00:00.516) 0:28:01.819 *******
2026-03-18 05:11:41.607615 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:41.607627 | orchestrator |
2026-03-18 05:11:41.607638 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-18 05:11:41.607649 | orchestrator | Wednesday 18 March 2026 05:11:30 +0000 (0:00:00.131) 0:28:01.951 *******
2026-03-18 05:11:41.607660 | orchestrator | skipping: [testbed-node-4]
2026-03-18 05:11:41.607670 | orchestrator |
2026-03-18 05:11:41.607681 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-18 05:11:41.607692 | orchestrator | Wednesday 18 March 2026 05:11:30 +0000 (0:00:00.180) 0:28:02.131 *******
2026-03-18 05:11:41.607703 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:11:41.607713 | orchestrator |
2026-03-18 05:11:41.607724 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-18 05:11:41.607735 | orchestrator | Wednesday 18 March 2026 05:11:30 +0000 (0:00:00.162) 0:28:02.293 *******
2026-03-18 05:11:41.607746 | orchestrator | ok: [testbed-node-4]
2026-03-18 05:11:41.607757 | orchestrator |
2026-03-18 05:11:41.607767 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-18 05:11:41.607778
| orchestrator | Wednesday 18 March 2026 05:11:30 +0000 (0:00:00.172) 0:28:02.465 ******* 2026-03-18 05:11:41.607789 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:41.607800 | orchestrator | 2026-03-18 05:11:41.607810 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-18 05:11:41.607821 | orchestrator | Wednesday 18 March 2026 05:11:31 +0000 (0:00:00.500) 0:28:02.966 ******* 2026-03-18 05:11:41.607832 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.607842 | orchestrator | 2026-03-18 05:11:41.607853 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-18 05:11:41.607864 | orchestrator | Wednesday 18 March 2026 05:11:31 +0000 (0:00:00.143) 0:28:03.110 ******* 2026-03-18 05:11:41.607874 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.607887 | orchestrator | 2026-03-18 05:11:41.607900 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-18 05:11:41.607912 | orchestrator | Wednesday 18 March 2026 05:11:31 +0000 (0:00:00.142) 0:28:03.252 ******* 2026-03-18 05:11:41.607924 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.607936 | orchestrator | 2026-03-18 05:11:41.607949 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-18 05:11:41.607961 | orchestrator | Wednesday 18 March 2026 05:11:31 +0000 (0:00:00.133) 0:28:03.385 ******* 2026-03-18 05:11:41.607973 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:41.607985 | orchestrator | 2026-03-18 05:11:41.607997 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-18 05:11:41.608033 | orchestrator | Wednesday 18 March 2026 05:11:31 +0000 (0:00:00.157) 0:28:03.543 ******* 2026-03-18 05:11:41.608046 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:41.608059 | orchestrator | 2026-03-18 05:11:41.608071 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-18 05:11:41.608083 | orchestrator | Wednesday 18 March 2026 05:11:32 +0000 (0:00:00.229) 0:28:03.772 ******* 2026-03-18 05:11:41.608094 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.608104 | orchestrator | 2026-03-18 05:11:41.608115 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-18 05:11:41.608126 | orchestrator | Wednesday 18 March 2026 05:11:32 +0000 (0:00:00.159) 0:28:03.932 ******* 2026-03-18 05:11:41.608136 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.608147 | orchestrator | 2026-03-18 05:11:41.608157 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-18 05:11:41.608168 | orchestrator | Wednesday 18 March 2026 05:11:32 +0000 (0:00:00.141) 0:28:04.074 ******* 2026-03-18 05:11:41.608178 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.608189 | orchestrator | 2026-03-18 05:11:41.608199 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-18 05:11:41.608210 | orchestrator | Wednesday 18 March 2026 05:11:32 +0000 (0:00:00.129) 0:28:04.203 ******* 2026-03-18 05:11:41.608221 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.608231 | orchestrator | 2026-03-18 05:11:41.608257 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-18 05:11:41.608269 | orchestrator | Wednesday 18 March 2026 05:11:32 +0000 (0:00:00.136) 0:28:04.339 ******* 2026-03-18 05:11:41.608280 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.608290 | orchestrator | 2026-03-18 05:11:41.608301 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-18 05:11:41.608312 | orchestrator | Wednesday 18 March 2026 05:11:32 +0000 (0:00:00.170) 0:28:04.510 ******* 
2026-03-18 05:11:41.608322 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.608333 | orchestrator | 2026-03-18 05:11:41.608344 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-18 05:11:41.608354 | orchestrator | Wednesday 18 March 2026 05:11:33 +0000 (0:00:00.162) 0:28:04.672 ******* 2026-03-18 05:11:41.608365 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.608376 | orchestrator | 2026-03-18 05:11:41.608386 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-18 05:11:41.608397 | orchestrator | Wednesday 18 March 2026 05:11:33 +0000 (0:00:00.474) 0:28:05.147 ******* 2026-03-18 05:11:41.608420 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.608451 | orchestrator | 2026-03-18 05:11:41.608462 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-18 05:11:41.608473 | orchestrator | Wednesday 18 March 2026 05:11:33 +0000 (0:00:00.136) 0:28:05.283 ******* 2026-03-18 05:11:41.608484 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.608494 | orchestrator | 2026-03-18 05:11:41.608520 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-18 05:11:41.608532 | orchestrator | Wednesday 18 March 2026 05:11:33 +0000 (0:00:00.147) 0:28:05.431 ******* 2026-03-18 05:11:41.608543 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.608554 | orchestrator | 2026-03-18 05:11:41.608564 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-18 05:11:41.608575 | orchestrator | Wednesday 18 March 2026 05:11:33 +0000 (0:00:00.129) 0:28:05.561 ******* 2026-03-18 05:11:41.608586 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.608596 | orchestrator | 2026-03-18 05:11:41.608607 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-18 05:11:41.608618 | orchestrator | Wednesday 18 March 2026 05:11:34 +0000 (0:00:00.153) 0:28:05.714 ******* 2026-03-18 05:11:41.608628 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.608639 | orchestrator | 2026-03-18 05:11:41.608650 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-18 05:11:41.608670 | orchestrator | Wednesday 18 March 2026 05:11:34 +0000 (0:00:00.253) 0:28:05.968 ******* 2026-03-18 05:11:41.608681 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:41.608691 | orchestrator | 2026-03-18 05:11:41.608702 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-18 05:11:41.608713 | orchestrator | Wednesday 18 March 2026 05:11:35 +0000 (0:00:00.957) 0:28:06.925 ******* 2026-03-18 05:11:41.608723 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:41.608734 | orchestrator | 2026-03-18 05:11:41.608745 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-18 05:11:41.608755 | orchestrator | Wednesday 18 March 2026 05:11:36 +0000 (0:00:01.237) 0:28:08.162 ******* 2026-03-18 05:11:41.608766 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-03-18 05:11:41.608777 | orchestrator | 2026-03-18 05:11:41.608788 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-18 05:11:41.608798 | orchestrator | Wednesday 18 March 2026 05:11:36 +0000 (0:00:00.236) 0:28:08.399 ******* 2026-03-18 05:11:41.608809 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.608819 | orchestrator | 2026-03-18 05:11:41.608830 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-18 05:11:41.608841 | orchestrator | Wednesday 18 March 2026 05:11:36 +0000 (0:00:00.140) 0:28:08.539 ******* 
2026-03-18 05:11:41.608851 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.608862 | orchestrator | 2026-03-18 05:11:41.608872 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-18 05:11:41.608883 | orchestrator | Wednesday 18 March 2026 05:11:37 +0000 (0:00:00.146) 0:28:08.686 ******* 2026-03-18 05:11:41.608894 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-18 05:11:41.608905 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-18 05:11:41.608915 | orchestrator | 2026-03-18 05:11:41.608926 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-18 05:11:41.608937 | orchestrator | Wednesday 18 March 2026 05:11:37 +0000 (0:00:00.804) 0:28:09.490 ******* 2026-03-18 05:11:41.608948 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:41.608958 | orchestrator | 2026-03-18 05:11:41.608969 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-18 05:11:41.608980 | orchestrator | Wednesday 18 March 2026 05:11:38 +0000 (0:00:00.760) 0:28:10.251 ******* 2026-03-18 05:11:41.608991 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.609002 | orchestrator | 2026-03-18 05:11:41.609012 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-18 05:11:41.609023 | orchestrator | Wednesday 18 March 2026 05:11:38 +0000 (0:00:00.164) 0:28:10.415 ******* 2026-03-18 05:11:41.609033 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.609044 | orchestrator | 2026-03-18 05:11:41.609055 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-18 05:11:41.609065 | orchestrator | Wednesday 18 March 2026 05:11:38 +0000 (0:00:00.167) 0:28:10.583 ******* 2026-03-18 05:11:41.609076 | orchestrator | 
skipping: [testbed-node-4] 2026-03-18 05:11:41.609087 | orchestrator | 2026-03-18 05:11:41.609097 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-18 05:11:41.609108 | orchestrator | Wednesday 18 March 2026 05:11:39 +0000 (0:00:00.157) 0:28:10.740 ******* 2026-03-18 05:11:41.609118 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-03-18 05:11:41.609129 | orchestrator | 2026-03-18 05:11:41.609145 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-18 05:11:41.609156 | orchestrator | Wednesday 18 March 2026 05:11:39 +0000 (0:00:00.224) 0:28:10.964 ******* 2026-03-18 05:11:41.609167 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:41.609177 | orchestrator | 2026-03-18 05:11:41.609188 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-18 05:11:41.609206 | orchestrator | Wednesday 18 March 2026 05:11:41 +0000 (0:00:01.764) 0:28:12.729 ******* 2026-03-18 05:11:41.609216 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 05:11:41.609227 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 05:11:41.609238 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-18 05:11:41.609249 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.609259 | orchestrator | 2026-03-18 05:11:41.609270 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-18 05:11:41.609281 | orchestrator | Wednesday 18 March 2026 05:11:41 +0000 (0:00:00.150) 0:28:12.879 ******* 2026-03-18 05:11:41.609291 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.609302 | orchestrator | 2026-03-18 05:11:41.609313 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-03-18 05:11:41.609323 | orchestrator | Wednesday 18 March 2026 05:11:41 +0000 (0:00:00.139) 0:28:13.018 ******* 2026-03-18 05:11:41.609334 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:41.609345 | orchestrator | 2026-03-18 05:11:41.609361 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-18 05:11:59.027938 | orchestrator | Wednesday 18 March 2026 05:11:41 +0000 (0:00:00.194) 0:28:13.213 ******* 2026-03-18 05:11:59.028058 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.028076 | orchestrator | 2026-03-18 05:11:59.028089 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-18 05:11:59.028100 | orchestrator | Wednesday 18 March 2026 05:11:41 +0000 (0:00:00.140) 0:28:13.353 ******* 2026-03-18 05:11:59.028112 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.028123 | orchestrator | 2026-03-18 05:11:59.028134 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-18 05:11:59.028145 | orchestrator | Wednesday 18 March 2026 05:11:41 +0000 (0:00:00.154) 0:28:13.507 ******* 2026-03-18 05:11:59.028157 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.028167 | orchestrator | 2026-03-18 05:11:59.028178 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-18 05:11:59.028189 | orchestrator | Wednesday 18 March 2026 05:11:42 +0000 (0:00:00.161) 0:28:13.669 ******* 2026-03-18 05:11:59.028200 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:59.028212 | orchestrator | 2026-03-18 05:11:59.028224 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-18 05:11:59.028236 | orchestrator | Wednesday 18 March 2026 05:11:43 +0000 (0:00:01.845) 0:28:15.514 ******* 2026-03-18 05:11:59.028247 | orchestrator | ok: 
[testbed-node-4] 2026-03-18 05:11:59.028257 | orchestrator | 2026-03-18 05:11:59.028268 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-18 05:11:59.028279 | orchestrator | Wednesday 18 March 2026 05:11:44 +0000 (0:00:00.150) 0:28:15.664 ******* 2026-03-18 05:11:59.028290 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-03-18 05:11:59.028301 | orchestrator | 2026-03-18 05:11:59.028312 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-18 05:11:59.028322 | orchestrator | Wednesday 18 March 2026 05:11:44 +0000 (0:00:00.231) 0:28:15.896 ******* 2026-03-18 05:11:59.028333 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.028344 | orchestrator | 2026-03-18 05:11:59.028355 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-18 05:11:59.028366 | orchestrator | Wednesday 18 March 2026 05:11:44 +0000 (0:00:00.158) 0:28:16.054 ******* 2026-03-18 05:11:59.028376 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.028387 | orchestrator | 2026-03-18 05:11:59.028398 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-18 05:11:59.028409 | orchestrator | Wednesday 18 March 2026 05:11:44 +0000 (0:00:00.152) 0:28:16.207 ******* 2026-03-18 05:11:59.028420 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.028495 | orchestrator | 2026-03-18 05:11:59.028516 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-18 05:11:59.028535 | orchestrator | Wednesday 18 March 2026 05:11:44 +0000 (0:00:00.160) 0:28:16.367 ******* 2026-03-18 05:11:59.028553 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.028570 | orchestrator | 2026-03-18 05:11:59.028588 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-03-18 05:11:59.028606 | orchestrator | Wednesday 18 March 2026 05:11:44 +0000 (0:00:00.169) 0:28:16.537 ******* 2026-03-18 05:11:59.028623 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.028644 | orchestrator | 2026-03-18 05:11:59.028665 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-18 05:11:59.028683 | orchestrator | Wednesday 18 March 2026 05:11:45 +0000 (0:00:00.156) 0:28:16.694 ******* 2026-03-18 05:11:59.028702 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.028715 | orchestrator | 2026-03-18 05:11:59.028729 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-18 05:11:59.028742 | orchestrator | Wednesday 18 March 2026 05:11:45 +0000 (0:00:00.140) 0:28:16.835 ******* 2026-03-18 05:11:59.028754 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.028767 | orchestrator | 2026-03-18 05:11:59.028779 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-18 05:11:59.028791 | orchestrator | Wednesday 18 March 2026 05:11:45 +0000 (0:00:00.167) 0:28:17.003 ******* 2026-03-18 05:11:59.028804 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.028817 | orchestrator | 2026-03-18 05:11:59.028829 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-18 05:11:59.028839 | orchestrator | Wednesday 18 March 2026 05:11:45 +0000 (0:00:00.146) 0:28:17.149 ******* 2026-03-18 05:11:59.028850 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:11:59.028861 | orchestrator | 2026-03-18 05:11:59.028886 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-18 05:11:59.028898 | orchestrator | Wednesday 18 March 2026 05:11:46 +0000 (0:00:00.523) 0:28:17.673 ******* 2026-03-18 05:11:59.028909 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-03-18 05:11:59.028920 | orchestrator | 2026-03-18 05:11:59.028931 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-18 05:11:59.028942 | orchestrator | Wednesday 18 March 2026 05:11:46 +0000 (0:00:00.219) 0:28:17.892 ******* 2026-03-18 05:11:59.028953 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-03-18 05:11:59.028964 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-18 05:11:59.028974 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-18 05:11:59.028985 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-18 05:11:59.028996 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-18 05:11:59.029006 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-18 05:11:59.029017 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-18 05:11:59.029027 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-18 05:11:59.029038 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 05:11:59.029049 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 05:11:59.029060 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 05:11:59.029088 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 05:11:59.029100 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 05:11:59.029111 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 05:11:59.029122 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-03-18 05:11:59.029133 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-03-18 05:11:59.029144 | orchestrator | 2026-03-18 05:11:59.029154 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-18 05:11:59.029176 | orchestrator | Wednesday 18 March 2026 05:11:51 +0000 (0:00:05.485) 0:28:23.377 ******* 2026-03-18 05:11:59.029187 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-03-18 05:11:59.029198 | orchestrator | 2026-03-18 05:11:59.029209 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-18 05:11:59.029219 | orchestrator | Wednesday 18 March 2026 05:11:51 +0000 (0:00:00.215) 0:28:23.592 ******* 2026-03-18 05:11:59.029230 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-18 05:11:59.029242 | orchestrator | 2026-03-18 05:11:59.029253 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-18 05:11:59.029264 | orchestrator | Wednesday 18 March 2026 05:11:52 +0000 (0:00:00.533) 0:28:24.125 ******* 2026-03-18 05:11:59.029275 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-18 05:11:59.029286 | orchestrator | 2026-03-18 05:11:59.029296 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-18 05:11:59.029307 | orchestrator | Wednesday 18 March 2026 05:11:53 +0000 (0:00:00.978) 0:28:25.104 ******* 2026-03-18 05:11:59.029318 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.029329 | orchestrator | 2026-03-18 05:11:59.029339 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-18 05:11:59.029350 | orchestrator | Wednesday 18 March 2026 05:11:53 +0000 (0:00:00.155) 0:28:25.260 ******* 2026-03-18 05:11:59.029361 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.029372 | 
orchestrator | 2026-03-18 05:11:59.029382 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-18 05:11:59.029393 | orchestrator | Wednesday 18 March 2026 05:11:53 +0000 (0:00:00.148) 0:28:25.409 ******* 2026-03-18 05:11:59.029404 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.029415 | orchestrator | 2026-03-18 05:11:59.029425 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-18 05:11:59.029436 | orchestrator | Wednesday 18 March 2026 05:11:53 +0000 (0:00:00.144) 0:28:25.553 ******* 2026-03-18 05:11:59.029487 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.029500 | orchestrator | 2026-03-18 05:11:59.029511 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-18 05:11:59.029521 | orchestrator | Wednesday 18 March 2026 05:11:54 +0000 (0:00:00.140) 0:28:25.693 ******* 2026-03-18 05:11:59.029532 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.029542 | orchestrator | 2026-03-18 05:11:59.029553 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-18 05:11:59.029564 | orchestrator | Wednesday 18 March 2026 05:11:54 +0000 (0:00:00.157) 0:28:25.851 ******* 2026-03-18 05:11:59.029575 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.029585 | orchestrator | 2026-03-18 05:11:59.029596 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-18 05:11:59.029607 | orchestrator | Wednesday 18 March 2026 05:11:54 +0000 (0:00:00.434) 0:28:26.285 ******* 2026-03-18 05:11:59.029618 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.029628 | orchestrator | 2026-03-18 05:11:59.029639 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-03-18 05:11:59.029650 | orchestrator | Wednesday 18 March 2026 05:11:54 +0000 (0:00:00.138) 0:28:26.424 ******* 2026-03-18 05:11:59.029661 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.029671 | orchestrator | 2026-03-18 05:11:59.029682 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-18 05:11:59.029693 | orchestrator | Wednesday 18 March 2026 05:11:54 +0000 (0:00:00.161) 0:28:26.585 ******* 2026-03-18 05:11:59.029704 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.029714 | orchestrator | 2026-03-18 05:11:59.029733 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-18 05:11:59.029744 | orchestrator | Wednesday 18 March 2026 05:11:55 +0000 (0:00:00.174) 0:28:26.759 ******* 2026-03-18 05:11:59.029755 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.029766 | orchestrator | 2026-03-18 05:11:59.029776 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-18 05:11:59.029787 | orchestrator | Wednesday 18 March 2026 05:11:55 +0000 (0:00:00.164) 0:28:26.924 ******* 2026-03-18 05:11:59.029798 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:11:59.029809 | orchestrator | 2026-03-18 05:11:59.029819 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-18 05:11:59.029830 | orchestrator | Wednesday 18 March 2026 05:11:55 +0000 (0:00:00.174) 0:28:27.099 ******* 2026-03-18 05:11:59.029841 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-03-18 05:11:59.029852 | orchestrator | 2026-03-18 05:11:59.029863 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-18 05:11:59.029873 | orchestrator | Wednesday 18 March 2026 05:11:58 +0000 (0:00:03.317) 0:28:30.416 ******* 2026-03-18 05:11:59.029884 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-18 05:11:59.029895 | orchestrator | 2026-03-18 05:11:59.029914 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-18 05:12:21.038184 | orchestrator | Wednesday 18 March 2026 05:11:59 +0000 (0:00:00.216) 0:28:30.633 ******* 2026-03-18 05:12:21.038294 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-18 05:12:21.038310 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-18 05:12:21.038321 | orchestrator | 2026-03-18 05:12:21.038331 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-18 05:12:21.038340 | orchestrator | Wednesday 18 March 2026 05:12:02 +0000 (0:00:03.725) 0:28:34.358 ******* 2026-03-18 05:12:21.038349 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:12:21.038360 | orchestrator | 2026-03-18 05:12:21.038369 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-18 05:12:21.038379 | orchestrator | Wednesday 18 March 2026 05:12:02 +0000 (0:00:00.149) 0:28:34.508 ******* 2026-03-18 05:12:21.038388 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:12:21.038440 | orchestrator | 2026-03-18 05:12:21.038451 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 05:12:21.038462 | orchestrator | Wednesday 18 March 2026 05:12:03 +0000 (0:00:00.125) 0:28:34.633 ******* 2026-03-18 05:12:21.038471 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:12:21.038480 | orchestrator | 2026-03-18 05:12:21.038489 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-18 05:12:21.038499 | orchestrator | Wednesday 18 March 2026 05:12:03 +0000 (0:00:00.178) 0:28:34.812 ******* 2026-03-18 05:12:21.038508 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:12:21.038518 | orchestrator | 2026-03-18 05:12:21.038527 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 05:12:21.038537 | orchestrator | Wednesday 18 March 2026 05:12:03 +0000 (0:00:00.175) 0:28:34.987 ******* 2026-03-18 05:12:21.038547 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:12:21.038556 | orchestrator | 2026-03-18 05:12:21.038566 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 05:12:21.038645 | orchestrator | Wednesday 18 March 2026 05:12:03 +0000 (0:00:00.497) 0:28:35.485 ******* 2026-03-18 05:12:21.038657 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:12:21.038667 | orchestrator | 2026-03-18 05:12:21.038675 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 05:12:21.038684 | orchestrator | Wednesday 18 March 2026 05:12:04 +0000 (0:00:00.270) 0:28:35.756 ******* 2026-03-18 05:12:21.038692 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-18 05:12:21.038702 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-18 05:12:21.038709 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-18 05:12:21.038718 | orchestrator | skipping: 
[testbed-node-4] 2026-03-18 05:12:21.038726 | orchestrator | 2026-03-18 05:12:21.038734 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 05:12:21.038745 | orchestrator | Wednesday 18 March 2026 05:12:04 +0000 (0:00:00.475) 0:28:36.232 ******* 2026-03-18 05:12:21.038755 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-18 05:12:21.038765 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-18 05:12:21.038775 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-18 05:12:21.038784 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:12:21.038793 | orchestrator | 2026-03-18 05:12:21.038802 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 05:12:21.038810 | orchestrator | Wednesday 18 March 2026 05:12:05 +0000 (0:00:00.464) 0:28:36.696 ******* 2026-03-18 05:12:21.038825 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-18 05:12:21.038833 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-18 05:12:21.038842 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-18 05:12:21.038852 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:12:21.038861 | orchestrator | 2026-03-18 05:12:21.038870 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 05:12:21.038879 | orchestrator | Wednesday 18 March 2026 05:12:05 +0000 (0:00:00.459) 0:28:37.156 ******* 2026-03-18 05:12:21.038888 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:12:21.038898 | orchestrator | 2026-03-18 05:12:21.038906 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 05:12:21.038914 | orchestrator | Wednesday 18 March 2026 05:12:05 +0000 (0:00:00.212) 0:28:37.368 ******* 2026-03-18 05:12:21.038924 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-03-18 05:12:21.038933 | orchestrator | 2026-03-18 05:12:21.038941 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-18 05:12:21.038949 | orchestrator | Wednesday 18 March 2026 05:12:06 +0000 (0:00:00.487) 0:28:37.856 ******* 2026-03-18 05:12:21.038956 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:12:21.038962 | orchestrator | 2026-03-18 05:12:21.038968 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-18 05:12:21.038974 | orchestrator | Wednesday 18 March 2026 05:12:07 +0000 (0:00:00.842) 0:28:38.698 ******* 2026-03-18 05:12:21.038980 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4 2026-03-18 05:12:21.038986 | orchestrator | 2026-03-18 05:12:21.039007 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-18 05:12:21.039014 | orchestrator | Wednesday 18 March 2026 05:12:07 +0000 (0:00:00.207) 0:28:38.905 ******* 2026-03-18 05:12:21.039019 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 05:12:21.039025 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-18 05:12:21.039030 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-18 05:12:21.039035 | orchestrator | 2026-03-18 05:12:21.039040 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-18 05:12:21.039045 | orchestrator | Wednesday 18 March 2026 05:12:09 +0000 (0:00:02.205) 0:28:41.110 ******* 2026-03-18 05:12:21.039059 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-18 05:12:21.039065 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-18 05:12:21.039070 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:12:21.039075 | orchestrator | 2026-03-18 05:12:21.039080 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-03-18 05:12:21.039085 | orchestrator | Wednesday 18 March 2026 05:12:10 +0000 (0:00:01.023) 0:28:42.134 ******* 2026-03-18 05:12:21.039090 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:12:21.039095 | orchestrator | 2026-03-18 05:12:21.039100 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-18 05:12:21.039105 | orchestrator | Wednesday 18 March 2026 05:12:10 +0000 (0:00:00.439) 0:28:42.573 ******* 2026-03-18 05:12:21.039110 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4 2026-03-18 05:12:21.039116 | orchestrator | 2026-03-18 05:12:21.039121 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-18 05:12:21.039126 | orchestrator | Wednesday 18 March 2026 05:12:11 +0000 (0:00:00.211) 0:28:42.784 ******* 2026-03-18 05:12:21.039131 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-18 05:12:21.039138 | orchestrator | 2026-03-18 05:12:21.039143 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-18 05:12:21.039148 | orchestrator | Wednesday 18 March 2026 05:12:11 +0000 (0:00:00.624) 0:28:43.409 ******* 2026-03-18 05:12:21.039153 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 05:12:21.039158 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-18 05:12:21.039164 | orchestrator | 2026-03-18 05:12:21.039169 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-18 05:12:21.039174 | orchestrator | Wednesday 18 March 2026 05:12:15 +0000 (0:00:04.070) 0:28:47.480 ******* 
2026-03-18 05:12:21.039179 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 05:12:21.039184 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-18 05:12:21.039189 | orchestrator | 2026-03-18 05:12:21.039194 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-18 05:12:21.039199 | orchestrator | Wednesday 18 March 2026 05:12:17 +0000 (0:00:02.022) 0:28:49.502 ******* 2026-03-18 05:12:21.039204 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-18 05:12:21.039209 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:12:21.039214 | orchestrator | 2026-03-18 05:12:21.039219 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-18 05:12:21.039224 | orchestrator | Wednesday 18 March 2026 05:12:18 +0000 (0:00:00.982) 0:28:50.485 ******* 2026-03-18 05:12:21.039229 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-03-18 05:12:21.039234 | orchestrator | 2026-03-18 05:12:21.039242 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-18 05:12:21.039250 | orchestrator | Wednesday 18 March 2026 05:12:19 +0000 (0:00:00.255) 0:28:50.740 ******* 2026-03-18 05:12:21.039258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:12:21.039271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:12:21.039281 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:12:21.039293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-18 05:12:21.039301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:12:21.039316 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:12:21.039324 | orchestrator | 2026-03-18 05:12:21.039332 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-18 05:12:21.039340 | orchestrator | Wednesday 18 March 2026 05:12:20 +0000 (0:00:00.945) 0:28:51.686 ******* 2026-03-18 05:12:21.039348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:12:21.039356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:12:21.039364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:12:21.039379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:13:06.671405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:13:06.671523 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:13:06.671543 | orchestrator | 2026-03-18 05:13:06.671555 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-18 05:13:06.671568 | orchestrator | Wednesday 18 March 2026 05:12:21 +0000 (0:00:00.953) 0:28:52.640 ******* 2026-03-18 05:13:06.671579 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-18 05:13:06.671592 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-18 05:13:06.671603 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-18 05:13:06.671614 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-18 05:13:06.671625 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-18 05:13:06.671636 | orchestrator | 2026-03-18 05:13:06.671647 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-18 05:13:06.671658 | orchestrator | Wednesday 18 March 2026 05:12:51 +0000 (0:00:30.547) 0:29:23.187 ******* 2026-03-18 05:13:06.671669 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:13:06.671680 | orchestrator | 2026-03-18 05:13:06.671690 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-18 05:13:06.671701 | orchestrator | Wednesday 18 March 2026 05:12:51 +0000 (0:00:00.126) 0:29:23.314 ******* 2026-03-18 05:13:06.671712 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:13:06.671723 | orchestrator | 2026-03-18 05:13:06.671734 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-18 05:13:06.671744 | orchestrator | Wednesday 18 March 2026 05:12:52 +0000 (0:00:00.432) 0:29:23.747 ******* 2026-03-18 05:13:06.671755 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-03-18 05:13:06.671767 | orchestrator | 2026-03-18 05:13:06.671777 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-18 05:13:06.671788 | orchestrator | Wednesday 18 March 2026 05:12:52 +0000 (0:00:00.229) 0:29:23.976 ******* 2026-03-18 05:13:06.671799 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-03-18 05:13:06.671810 | orchestrator | 2026-03-18 05:13:06.671820 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-18 05:13:06.671831 | orchestrator | Wednesday 18 March 2026 05:12:52 +0000 (0:00:00.217) 0:29:24.194 ******* 2026-03-18 05:13:06.671869 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:13:06.671881 | orchestrator | 2026-03-18 05:13:06.671892 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-18 05:13:06.671903 | orchestrator | Wednesday 18 March 2026 05:12:53 +0000 (0:00:01.081) 0:29:25.276 ******* 2026-03-18 05:13:06.671914 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:13:06.671925 | orchestrator | 2026-03-18 05:13:06.671936 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-18 05:13:06.671949 | orchestrator | Wednesday 18 March 2026 05:12:54 +0000 (0:00:00.919) 0:29:26.195 ******* 2026-03-18 05:13:06.671962 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:13:06.671974 | orchestrator | 2026-03-18 05:13:06.671987 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-18 05:13:06.672000 | orchestrator | Wednesday 18 March 2026 05:12:55 +0000 (0:00:01.228) 0:29:27.424 ******* 2026-03-18 05:13:06.672027 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-18 05:13:06.672041 | orchestrator | 2026-03-18 05:13:06.672053 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-18 05:13:06.672066 | 
orchestrator | 2026-03-18 05:13:06.672078 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 05:13:06.672092 | orchestrator | Wednesday 18 March 2026 05:12:58 +0000 (0:00:02.466) 0:29:29.890 ******* 2026-03-18 05:13:06.672105 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-18 05:13:06.672115 | orchestrator | 2026-03-18 05:13:06.672126 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-18 05:13:06.672208 | orchestrator | Wednesday 18 March 2026 05:12:58 +0000 (0:00:00.257) 0:29:30.148 ******* 2026-03-18 05:13:06.672222 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:06.672233 | orchestrator | 2026-03-18 05:13:06.672244 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-18 05:13:06.672254 | orchestrator | Wednesday 18 March 2026 05:12:59 +0000 (0:00:00.720) 0:29:30.868 ******* 2026-03-18 05:13:06.672265 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:06.672276 | orchestrator | 2026-03-18 05:13:06.672287 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 05:13:06.672297 | orchestrator | Wednesday 18 March 2026 05:12:59 +0000 (0:00:00.150) 0:29:31.019 ******* 2026-03-18 05:13:06.672308 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:06.672318 | orchestrator | 2026-03-18 05:13:06.672329 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 05:13:06.672339 | orchestrator | Wednesday 18 March 2026 05:12:59 +0000 (0:00:00.501) 0:29:31.520 ******* 2026-03-18 05:13:06.672350 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:06.672361 | orchestrator | 2026-03-18 05:13:06.672389 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-18 05:13:06.672401 | orchestrator | Wednesday 
18 March 2026 05:13:00 +0000 (0:00:00.167) 0:29:31.688 ******* 2026-03-18 05:13:06.672411 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:06.672422 | orchestrator | 2026-03-18 05:13:06.672433 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-18 05:13:06.672443 | orchestrator | Wednesday 18 March 2026 05:13:00 +0000 (0:00:00.172) 0:29:31.860 ******* 2026-03-18 05:13:06.672454 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:06.672465 | orchestrator | 2026-03-18 05:13:06.672475 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-18 05:13:06.672486 | orchestrator | Wednesday 18 March 2026 05:13:00 +0000 (0:00:00.160) 0:29:32.021 ******* 2026-03-18 05:13:06.672497 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:06.672507 | orchestrator | 2026-03-18 05:13:06.672518 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-18 05:13:06.672529 | orchestrator | Wednesday 18 March 2026 05:13:00 +0000 (0:00:00.161) 0:29:32.182 ******* 2026-03-18 05:13:06.672549 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:06.672560 | orchestrator | 2026-03-18 05:13:06.672571 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-18 05:13:06.672582 | orchestrator | Wednesday 18 March 2026 05:13:00 +0000 (0:00:00.164) 0:29:32.347 ******* 2026-03-18 05:13:06.672592 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 05:13:06.672603 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:13:06.672614 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:13:06.672624 | orchestrator | 2026-03-18 05:13:06.672635 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-03-18 05:13:06.672645 | orchestrator | Wednesday 18 March 2026 05:13:01 +0000 (0:00:01.098) 0:29:33.446 ******* 2026-03-18 05:13:06.672656 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:06.672667 | orchestrator | 2026-03-18 05:13:06.672677 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-18 05:13:06.672688 | orchestrator | Wednesday 18 March 2026 05:13:02 +0000 (0:00:00.277) 0:29:33.724 ******* 2026-03-18 05:13:06.672698 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 05:13:06.672709 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:13:06.672719 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:13:06.672730 | orchestrator | 2026-03-18 05:13:06.672741 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-18 05:13:06.672751 | orchestrator | Wednesday 18 March 2026 05:13:04 +0000 (0:00:02.245) 0:29:35.969 ******* 2026-03-18 05:13:06.672762 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-18 05:13:06.672773 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-18 05:13:06.672783 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-18 05:13:06.672794 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:06.672805 | orchestrator | 2026-03-18 05:13:06.672815 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-18 05:13:06.672826 | orchestrator | Wednesday 18 March 2026 05:13:05 +0000 (0:00:00.797) 0:29:36.767 ******* 2026-03-18 05:13:06.672839 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-18 05:13:06.672852 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-18 05:13:06.672870 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-18 05:13:06.672881 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:06.672892 | orchestrator | 2026-03-18 05:13:06.672903 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-18 05:13:06.672913 | orchestrator | Wednesday 18 March 2026 05:13:06 +0000 (0:00:01.008) 0:29:37.776 ******* 2026-03-18 05:13:06.672926 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:06.672947 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:10.864440 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:10.864541 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:10.864557 | orchestrator | 2026-03-18 05:13:10.864570 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-18 05:13:10.864582 | orchestrator | Wednesday 18 March 2026 05:13:06 +0000 (0:00:00.499) 0:29:38.275 ******* 2026-03-18 05:13:10.864597 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'f231ed715636', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-18 05:13:02.633643', 'end': '2026-03-18 05:13:02.684670', 'delta': '0:00:00.051027', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f231ed715636'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-18 05:13:10.864612 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'c6b616adb9bf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-18 05:13:03.584630', 'end': '2026-03-18 05:13:03.640437', 'delta': '0:00:00.055807', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c6b616adb9bf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-18 05:13:10.864640 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '38d5679b5612', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-18 05:13:04.154497', 'end': '2026-03-18 05:13:04.205073', 'delta': '0:00:00.050576', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['38d5679b5612'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-18 05:13:10.864652 | orchestrator | 2026-03-18 05:13:10.864663 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-18 05:13:10.864675 | orchestrator | Wednesday 18 March 2026 05:13:06 +0000 (0:00:00.227) 0:29:38.503 ******* 2026-03-18 05:13:10.864685 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:10.864697 | orchestrator | 2026-03-18 05:13:10.864708 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-18 05:13:10.864725 | orchestrator | Wednesday 18 March 2026 05:13:07 +0000 (0:00:00.283) 0:29:38.787 ******* 2026-03-18 05:13:10.864744 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:10.864826 | orchestrator | 2026-03-18 05:13:10.864838 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-03-18 05:13:10.864849 | orchestrator | Wednesday 18 March 2026 05:13:07 +0000 (0:00:00.259) 0:29:39.046 ******* 2026-03-18 05:13:10.864860 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:10.864871 | orchestrator | 2026-03-18 05:13:10.864881 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-18 05:13:10.864892 | orchestrator | Wednesday 18 March 2026 05:13:07 +0000 (0:00:00.162) 0:29:39.208 ******* 2026-03-18 05:13:10.864903 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-18 05:13:10.864914 | orchestrator | 2026-03-18 05:13:10.864925 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 05:13:10.864938 | orchestrator | Wednesday 18 March 2026 05:13:08 +0000 (0:00:00.976) 0:29:40.184 ******* 2026-03-18 05:13:10.864951 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:10.864963 | orchestrator | 2026-03-18 05:13:10.864975 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-18 05:13:10.864988 | orchestrator | Wednesday 18 March 2026 05:13:08 +0000 (0:00:00.163) 0:29:40.348 ******* 2026-03-18 05:13:10.865018 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:10.865032 | orchestrator | 2026-03-18 05:13:10.865044 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-18 05:13:10.865057 | orchestrator | Wednesday 18 March 2026 05:13:08 +0000 (0:00:00.138) 0:29:40.486 ******* 2026-03-18 05:13:10.865069 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:10.865081 | orchestrator | 2026-03-18 05:13:10.865094 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-18 05:13:10.865106 | orchestrator | Wednesday 18 March 2026 05:13:09 +0000 (0:00:00.248) 0:29:40.734 ******* 2026-03-18 05:13:10.865191 | orchestrator | 
skipping: [testbed-node-5] 2026-03-18 05:13:10.865205 | orchestrator | 2026-03-18 05:13:10.865218 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-18 05:13:10.865230 | orchestrator | Wednesday 18 March 2026 05:13:09 +0000 (0:00:00.209) 0:29:40.944 ******* 2026-03-18 05:13:10.865242 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:10.865255 | orchestrator | 2026-03-18 05:13:10.865268 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-18 05:13:10.865280 | orchestrator | Wednesday 18 March 2026 05:13:09 +0000 (0:00:00.140) 0:29:41.084 ******* 2026-03-18 05:13:10.865292 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:10.865302 | orchestrator | 2026-03-18 05:13:10.865313 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-18 05:13:10.865324 | orchestrator | Wednesday 18 March 2026 05:13:09 +0000 (0:00:00.185) 0:29:41.269 ******* 2026-03-18 05:13:10.865335 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:10.865346 | orchestrator | 2026-03-18 05:13:10.865357 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-18 05:13:10.865366 | orchestrator | Wednesday 18 March 2026 05:13:09 +0000 (0:00:00.147) 0:29:41.417 ******* 2026-03-18 05:13:10.865376 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:10.865385 | orchestrator | 2026-03-18 05:13:10.865395 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-18 05:13:10.865404 | orchestrator | Wednesday 18 March 2026 05:13:10 +0000 (0:00:00.510) 0:29:41.928 ******* 2026-03-18 05:13:10.865414 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:10.865423 | orchestrator | 2026-03-18 05:13:10.865433 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-18 05:13:10.865443 
| orchestrator | Wednesday 18 March 2026 05:13:10 +0000 (0:00:00.152) 0:29:42.081 ******* 2026-03-18 05:13:10.865452 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:10.865462 | orchestrator | 2026-03-18 05:13:10.865471 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-18 05:13:10.865481 | orchestrator | Wednesday 18 March 2026 05:13:10 +0000 (0:00:00.176) 0:29:42.257 ******* 2026-03-18 05:13:10.865491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:13:10.865518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f', 'dm-uuid-LVM-IyJ409WPQ2Ewwg643e4T8GcTWsVLXvc4PfxdfcUZHCmpn1f575ZO5FoE28c03VdS'], 'uuids': ['0c1ae19d-2c32-4e94-8f09-c34bb952e967'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '54344bae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS']}})  2026-03-18 05:13:10.865530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216', 'scsi-SQEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '343cfa22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-18 05:13:10.865550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-wEEZ4B-D8dq-p1QG-iT9B-teZl-6bRA-4Rtw7V', 'scsi-0QEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00', 'scsi-SQEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '92bad715', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea']}})  2026-03-18 05:13:11.003028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:13:11.003185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:13:11.003212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-18 05:13:11.003263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:13:11.003282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw', 'dm-uuid-CRYPT-LUKS2-b658b175f7d84bc1a9acacbdfc2fb3a4-T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-18 05:13:11.003319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:13:11.003339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea', 'dm-uuid-LVM-datDZvt3H0VWDhIXtfyG2nxxdM9DebWAT9QYVvDcd9eNFRbEejIJhI9dObKuqGRw'], 'uuids': ['b658b175-f7d8-4bc1-a9ac-acbdfc2fb3a4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '92bad715', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw']}})  2026-03-18 05:13:11.003383 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-yCkM9t-1XKI-b30Y-UmhR-lcOf-KBlN-LK1ss0', 'scsi-0QEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568', 'scsi-SQEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '54344bae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f']}})  2026-03-18 05:13:11.003405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:13:11.003440 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '15119f5e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-18 05:13:11.003476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:13:11.003497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-18 05:13:11.003528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS', 'dm-uuid-CRYPT-LUKS2-0c1ae19d2c324e948f09c34bb952e967-Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-18 05:13:11.227405 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:11.227535 | orchestrator | 2026-03-18 05:13:11.227562 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-18 05:13:11.227575 | orchestrator | Wednesday 18 March 2026 05:13:10 +0000 (0:00:00.356) 0:29:42.613 ******* 2026-03-18 05:13:11.227590 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:11.227629 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f', 'dm-uuid-LVM-IyJ409WPQ2Ewwg643e4T8GcTWsVLXvc4PfxdfcUZHCmpn1f575ZO5FoE28c03VdS'], 'uuids': ['0c1ae19d-2c32-4e94-8f09-c34bb952e967'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '54344bae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:11.227667 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216', 'scsi-SQEMU_QEMU_HARDDISK_343cfa22-0406-40f4-a0e7-97fc1bbcc216'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '343cfa22', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:11.227681 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-wEEZ4B-D8dq-p1QG-iT9B-teZl-6bRA-4Rtw7V', 'scsi-0QEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00', 'scsi-SQEMU_QEMU_HARDDISK_92bad715-eec0-475b-8af5-3664f3458c00'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '92bad715', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:11.227717 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:11.227729 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:11.227753 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-18-01-18-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:11.227766 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:11.227783 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw', 'dm-uuid-CRYPT-LUKS2-b658b175f7d84bc1a9acacbdfc2fb3a4-T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:11.227794 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:11.227814 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--def37aef--ab10--5729--81f7--b9371c5efcea-osd--block--def37aef--ab10--5729--81f7--b9371c5efcea', 'dm-uuid-LVM-datDZvt3H0VWDhIXtfyG2nxxdM9DebWAT9QYVvDcd9eNFRbEejIJhI9dObKuqGRw'], 'uuids': ['b658b175-f7d8-4bc1-a9ac-acbdfc2fb3a4'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '92bad715', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['T9QYVv-Dcd9-eNFR-bEej-IJhI-9dOb-KuqGRw']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:14.870089 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-yCkM9t-1XKI-b30Y-UmhR-lcOf-KBlN-LK1ss0', 'scsi-0QEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568', 'scsi-SQEMU_QEMU_HARDDISK_54344bae-1dab-46bd-b563-a8bed09fd568'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '54344bae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f498c8c9--64fb--5c46--ab13--dfed2090c41f-osd--block--f498c8c9--64fb--5c46--ab13--dfed2090c41f']}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:14.870304 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:14.870344 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '15119f5e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1', 'scsi-SQEMU_QEMU_HARDDISK_15119f5e-a47f-40dc-b692-43d931272403-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:14.870379 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:14.870399 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:14.870412 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS', 'dm-uuid-CRYPT-LUKS2-0c1ae19d2c324e948f09c34bb952e967-Pfxdfc-UZHC-mpn1-f575-ZO5F-oE28-c03VdS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-18 05:13:14.870423 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:14.870436 | orchestrator | 2026-03-18 05:13:14.870447 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-18 05:13:14.870459 | orchestrator | Wednesday 18 March 2026 05:13:11 +0000 (0:00:00.397) 0:29:43.010 ******* 2026-03-18 05:13:14.870470 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:14.870482 | orchestrator | 2026-03-18 05:13:14.870493 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-18 05:13:14.870504 | orchestrator | Wednesday 18 March 2026 05:13:11 +0000 (0:00:00.480) 0:29:43.491 ******* 2026-03-18 05:13:14.870520 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:14.870532 | orchestrator | 2026-03-18 05:13:14.870543 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 05:13:14.870554 | orchestrator | Wednesday 18 March 2026 05:13:12 +0000 (0:00:00.141) 0:29:43.633 ******* 2026-03-18 05:13:14.870569 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:14.870586 | orchestrator | 2026-03-18 05:13:14.870602 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 05:13:14.870617 | orchestrator | Wednesday 18 March 2026 05:13:12 +0000 (0:00:00.492) 0:29:44.125 ******* 2026-03-18 05:13:14.870634 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:14.870651 | orchestrator | 2026-03-18 05:13:14.870670 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-18 05:13:14.870682 | orchestrator | Wednesday 18 March 2026 05:13:12 +0000 (0:00:00.135) 0:29:44.260 ******* 2026-03-18 05:13:14.870694 | orchestrator | skipping: [testbed-node-5] 2026-03-18 
05:13:14.870711 | orchestrator | 2026-03-18 05:13:14.870727 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-18 05:13:14.870769 | orchestrator | Wednesday 18 March 2026 05:13:12 +0000 (0:00:00.241) 0:29:44.502 ******* 2026-03-18 05:13:14.870787 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:14.870800 | orchestrator | 2026-03-18 05:13:14.870811 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-18 05:13:14.870822 | orchestrator | Wednesday 18 March 2026 05:13:13 +0000 (0:00:00.170) 0:29:44.672 ******* 2026-03-18 05:13:14.870834 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-18 05:13:14.870845 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-18 05:13:14.870863 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-18 05:13:14.870873 | orchestrator | 2026-03-18 05:13:14.870883 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-18 05:13:14.870892 | orchestrator | Wednesday 18 March 2026 05:13:14 +0000 (0:00:01.030) 0:29:45.702 ******* 2026-03-18 05:13:14.870902 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-18 05:13:14.870911 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-18 05:13:14.870921 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-18 05:13:14.870931 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:14.870940 | orchestrator | 2026-03-18 05:13:14.870950 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-18 05:13:14.870960 | orchestrator | Wednesday 18 March 2026 05:13:14 +0000 (0:00:00.213) 0:29:45.916 ******* 2026-03-18 05:13:14.870969 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-03-18 05:13:14.870979 | 
orchestrator | 2026-03-18 05:13:14.870998 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 05:13:31.480371 | orchestrator | Wednesday 18 March 2026 05:13:14 +0000 (0:00:00.562) 0:29:46.478 ******* 2026-03-18 05:13:31.480505 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:31.480524 | orchestrator | 2026-03-18 05:13:31.480537 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-18 05:13:31.480548 | orchestrator | Wednesday 18 March 2026 05:13:15 +0000 (0:00:00.163) 0:29:46.642 ******* 2026-03-18 05:13:31.480559 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:31.480570 | orchestrator | 2026-03-18 05:13:31.480582 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 05:13:31.480593 | orchestrator | Wednesday 18 March 2026 05:13:15 +0000 (0:00:00.158) 0:29:46.800 ******* 2026-03-18 05:13:31.480603 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:31.480614 | orchestrator | 2026-03-18 05:13:31.480625 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 05:13:31.480635 | orchestrator | Wednesday 18 March 2026 05:13:15 +0000 (0:00:00.179) 0:29:46.980 ******* 2026-03-18 05:13:31.480646 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:31.480658 | orchestrator | 2026-03-18 05:13:31.480668 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 05:13:31.480679 | orchestrator | Wednesday 18 March 2026 05:13:15 +0000 (0:00:00.299) 0:29:47.280 ******* 2026-03-18 05:13:31.480691 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-18 05:13:31.480702 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-18 05:13:31.480713 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-03-18 05:13:31.480724 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:31.480734 | orchestrator | 2026-03-18 05:13:31.480745 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 05:13:31.480756 | orchestrator | Wednesday 18 March 2026 05:13:16 +0000 (0:00:00.445) 0:29:47.725 ******* 2026-03-18 05:13:31.480767 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-18 05:13:31.480778 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-18 05:13:31.480788 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-18 05:13:31.480799 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:31.480810 | orchestrator | 2026-03-18 05:13:31.480821 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 05:13:31.480831 | orchestrator | Wednesday 18 March 2026 05:13:16 +0000 (0:00:00.428) 0:29:48.154 ******* 2026-03-18 05:13:31.480842 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-18 05:13:31.480853 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-18 05:13:31.480864 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-18 05:13:31.480900 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:31.480914 | orchestrator | 2026-03-18 05:13:31.480926 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 05:13:31.480938 | orchestrator | Wednesday 18 March 2026 05:13:16 +0000 (0:00:00.410) 0:29:48.564 ******* 2026-03-18 05:13:31.480951 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:31.480963 | orchestrator | 2026-03-18 05:13:31.480989 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 05:13:31.481003 | orchestrator | Wednesday 18 March 2026 05:13:17 +0000 
(0:00:00.172) 0:29:48.737 ******* 2026-03-18 05:13:31.481042 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-18 05:13:31.481054 | orchestrator | 2026-03-18 05:13:31.481067 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-18 05:13:31.481079 | orchestrator | Wednesday 18 March 2026 05:13:17 +0000 (0:00:00.357) 0:29:49.094 ******* 2026-03-18 05:13:31.481092 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 05:13:31.481105 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:13:31.481117 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:13:31.481130 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-18 05:13:31.481142 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 05:13:31.481155 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-18 05:13:31.481166 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 05:13:31.481176 | orchestrator | 2026-03-18 05:13:31.481187 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-18 05:13:31.481198 | orchestrator | Wednesday 18 March 2026 05:13:18 +0000 (0:00:01.211) 0:29:50.306 ******* 2026-03-18 05:13:31.481208 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-18 05:13:31.481219 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-18 05:13:31.481229 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-18 05:13:31.481240 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-03-18 05:13:31.481251 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-18 05:13:31.481261 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-18 05:13:31.481272 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-18 05:13:31.481282 | orchestrator | 2026-03-18 05:13:31.481293 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-03-18 05:13:31.481303 | orchestrator | Wednesday 18 March 2026 05:13:20 +0000 (0:00:02.095) 0:29:52.402 ******* 2026-03-18 05:13:31.481314 | orchestrator | changed: [testbed-node-5] 2026-03-18 05:13:31.481325 | orchestrator | 2026-03-18 05:13:31.481354 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-03-18 05:13:31.481365 | orchestrator | Wednesday 18 March 2026 05:13:23 +0000 (0:00:02.331) 0:29:54.734 ******* 2026-03-18 05:13:31.481377 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-18 05:13:31.481389 | orchestrator | 2026-03-18 05:13:31.481400 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-03-18 05:13:31.481411 | orchestrator | Wednesday 18 March 2026 05:13:25 +0000 (0:00:02.038) 0:29:56.773 ******* 2026-03-18 05:13:31.481422 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-18 05:13:31.481432 | orchestrator | 2026-03-18 05:13:31.481443 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-18 05:13:31.481462 | orchestrator | Wednesday 18 March 2026 05:13:26 +0000 (0:00:01.297) 0:29:58.070 ******* 2026-03-18 05:13:31.481473 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-03-18 05:13:31.481484 | orchestrator | 2026-03-18 05:13:31.481495 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-18 05:13:31.481505 | orchestrator | Wednesday 18 March 2026 05:13:26 +0000 (0:00:00.221) 0:29:58.291 ******* 2026-03-18 05:13:31.481516 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-03-18 05:13:31.481527 | orchestrator | 2026-03-18 05:13:31.481538 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-18 05:13:31.481548 | orchestrator | Wednesday 18 March 2026 05:13:26 +0000 (0:00:00.238) 0:29:58.530 ******* 2026-03-18 05:13:31.481559 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:31.481570 | orchestrator | 2026-03-18 05:13:31.481581 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-18 05:13:31.481591 | orchestrator | Wednesday 18 March 2026 05:13:27 +0000 (0:00:00.140) 0:29:58.671 ******* 2026-03-18 05:13:31.481602 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:31.481613 | orchestrator | 2026-03-18 05:13:31.481624 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-03-18 05:13:31.481635 | orchestrator | Wednesday 18 March 2026 05:13:27 +0000 (0:00:00.528) 0:29:59.199 ******* 2026-03-18 05:13:31.481645 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:31.481656 | orchestrator | 2026-03-18 05:13:31.481667 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-18 05:13:31.481678 | orchestrator | Wednesday 18 March 2026 05:13:28 +0000 (0:00:00.572) 0:29:59.772 ******* 2026-03-18 05:13:31.481688 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:31.481699 | orchestrator | 2026-03-18 05:13:31.481710 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-18 05:13:31.481721 | orchestrator | Wednesday 18 March 2026 05:13:28 +0000 (0:00:00.570) 0:30:00.343 ******* 2026-03-18 05:13:31.481732 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:31.481744 | orchestrator | 2026-03-18 05:13:31.481765 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-18 05:13:31.481791 | orchestrator | Wednesday 18 March 2026 05:13:28 +0000 (0:00:00.137) 0:30:00.480 ******* 2026-03-18 05:13:31.481810 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:31.481830 | orchestrator | 2026-03-18 05:13:31.481850 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-18 05:13:31.481863 | orchestrator | Wednesday 18 March 2026 05:13:29 +0000 (0:00:00.439) 0:30:00.919 ******* 2026-03-18 05:13:31.481874 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:31.481885 | orchestrator | 2026-03-18 05:13:31.481895 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-18 05:13:31.481906 | orchestrator | Wednesday 18 March 2026 05:13:29 +0000 (0:00:00.159) 0:30:01.079 ******* 2026-03-18 05:13:31.481917 | 
orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:31.481927 | orchestrator | 2026-03-18 05:13:31.481938 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-18 05:13:31.481949 | orchestrator | Wednesday 18 March 2026 05:13:30 +0000 (0:00:00.545) 0:30:01.625 ******* 2026-03-18 05:13:31.481959 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:31.481970 | orchestrator | 2026-03-18 05:13:31.481981 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-18 05:13:31.481992 | orchestrator | Wednesday 18 March 2026 05:13:30 +0000 (0:00:00.548) 0:30:02.173 ******* 2026-03-18 05:13:31.482130 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:31.482159 | orchestrator | 2026-03-18 05:13:31.482171 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-18 05:13:31.482183 | orchestrator | Wednesday 18 March 2026 05:13:30 +0000 (0:00:00.141) 0:30:02.315 ******* 2026-03-18 05:13:31.482202 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:31.482232 | orchestrator | 2026-03-18 05:13:31.482252 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-18 05:13:31.482272 | orchestrator | Wednesday 18 March 2026 05:13:30 +0000 (0:00:00.136) 0:30:02.452 ******* 2026-03-18 05:13:31.482291 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:31.482310 | orchestrator | 2026-03-18 05:13:31.482329 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-18 05:13:31.482346 | orchestrator | Wednesday 18 March 2026 05:13:31 +0000 (0:00:00.169) 0:30:02.621 ******* 2026-03-18 05:13:31.482365 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:31.482383 | orchestrator | 2026-03-18 05:13:31.482401 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-18 05:13:31.482412 
| orchestrator | Wednesday 18 March 2026 05:13:31 +0000 (0:00:00.173) 0:30:02.794 ******* 2026-03-18 05:13:31.482423 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:31.482433 | orchestrator | 2026-03-18 05:13:31.482444 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-18 05:13:31.482455 | orchestrator | Wednesday 18 March 2026 05:13:31 +0000 (0:00:00.151) 0:30:02.946 ******* 2026-03-18 05:13:31.482465 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:31.482476 | orchestrator | 2026-03-18 05:13:31.482497 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-18 05:13:43.383665 | orchestrator | Wednesday 18 March 2026 05:13:31 +0000 (0:00:00.136) 0:30:03.083 ******* 2026-03-18 05:13:43.383796 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.383824 | orchestrator | 2026-03-18 05:13:43.383845 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-18 05:13:43.383864 | orchestrator | Wednesday 18 March 2026 05:13:31 +0000 (0:00:00.149) 0:30:03.233 ******* 2026-03-18 05:13:43.383881 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.383899 | orchestrator | 2026-03-18 05:13:43.383923 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-18 05:13:43.383997 | orchestrator | Wednesday 18 March 2026 05:13:31 +0000 (0:00:00.131) 0:30:03.364 ******* 2026-03-18 05:13:43.384013 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:43.384025 | orchestrator | 2026-03-18 05:13:43.384036 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-18 05:13:43.384047 | orchestrator | Wednesday 18 March 2026 05:13:31 +0000 (0:00:00.165) 0:30:03.529 ******* 2026-03-18 05:13:43.384058 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:43.384069 | orchestrator | 2026-03-18 05:13:43.384080 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-18 05:13:43.384091 | orchestrator | Wednesday 18 March 2026 05:13:32 +0000 (0:00:00.550) 0:30:04.080 ******* 2026-03-18 05:13:43.384102 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.384113 | orchestrator | 2026-03-18 05:13:43.384124 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-18 05:13:43.384135 | orchestrator | Wednesday 18 March 2026 05:13:32 +0000 (0:00:00.153) 0:30:04.234 ******* 2026-03-18 05:13:43.384145 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.384156 | orchestrator | 2026-03-18 05:13:43.384167 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-18 05:13:43.384179 | orchestrator | Wednesday 18 March 2026 05:13:32 +0000 (0:00:00.139) 0:30:04.373 ******* 2026-03-18 05:13:43.384191 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.384204 | orchestrator | 2026-03-18 05:13:43.384217 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-18 05:13:43.384230 | orchestrator | Wednesday 18 March 2026 05:13:32 +0000 (0:00:00.157) 0:30:04.531 ******* 2026-03-18 05:13:43.384243 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.384255 | orchestrator | 2026-03-18 05:13:43.384268 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-18 05:13:43.384281 | orchestrator | Wednesday 18 March 2026 05:13:33 +0000 (0:00:00.144) 0:30:04.675 ******* 2026-03-18 05:13:43.384293 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.384332 | orchestrator | 2026-03-18 05:13:43.384345 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-18 05:13:43.384358 | orchestrator | Wednesday 18 March 2026 05:13:33 +0000 (0:00:00.114) 0:30:04.790 ******* 
2026-03-18 05:13:43.384369 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.384380 | orchestrator | 2026-03-18 05:13:43.384390 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-18 05:13:43.384402 | orchestrator | Wednesday 18 March 2026 05:13:33 +0000 (0:00:00.149) 0:30:04.939 ******* 2026-03-18 05:13:43.384412 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.384423 | orchestrator | 2026-03-18 05:13:43.384448 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-18 05:13:43.384461 | orchestrator | Wednesday 18 March 2026 05:13:33 +0000 (0:00:00.138) 0:30:05.078 ******* 2026-03-18 05:13:43.384471 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.384482 | orchestrator | 2026-03-18 05:13:43.384493 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-18 05:13:43.384503 | orchestrator | Wednesday 18 March 2026 05:13:33 +0000 (0:00:00.143) 0:30:05.221 ******* 2026-03-18 05:13:43.384514 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.384525 | orchestrator | 2026-03-18 05:13:43.384535 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-18 05:13:43.384546 | orchestrator | Wednesday 18 March 2026 05:13:33 +0000 (0:00:00.140) 0:30:05.362 ******* 2026-03-18 05:13:43.384557 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.384567 | orchestrator | 2026-03-18 05:13:43.384578 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-18 05:13:43.384589 | orchestrator | Wednesday 18 March 2026 05:13:33 +0000 (0:00:00.163) 0:30:05.525 ******* 2026-03-18 05:13:43.384599 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.384610 | orchestrator | 2026-03-18 05:13:43.384621 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-18 05:13:43.384631 | orchestrator | Wednesday 18 March 2026 05:13:34 +0000 (0:00:00.125) 0:30:05.651 ******* 2026-03-18 05:13:43.384642 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.384653 | orchestrator | 2026-03-18 05:13:43.384663 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-18 05:13:43.384674 | orchestrator | Wednesday 18 March 2026 05:13:34 +0000 (0:00:00.552) 0:30:06.204 ******* 2026-03-18 05:13:43.384685 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:43.384696 | orchestrator | 2026-03-18 05:13:43.384706 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-18 05:13:43.384718 | orchestrator | Wednesday 18 March 2026 05:13:35 +0000 (0:00:00.948) 0:30:07.152 ******* 2026-03-18 05:13:43.384728 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:43.384739 | orchestrator | 2026-03-18 05:13:43.384750 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-18 05:13:43.384761 | orchestrator | Wednesday 18 March 2026 05:13:36 +0000 (0:00:01.229) 0:30:08.382 ******* 2026-03-18 05:13:43.384771 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-03-18 05:13:43.384783 | orchestrator | 2026-03-18 05:13:43.384794 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-18 05:13:43.384804 | orchestrator | Wednesday 18 March 2026 05:13:37 +0000 (0:00:00.237) 0:30:08.619 ******* 2026-03-18 05:13:43.384815 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.384826 | orchestrator | 2026-03-18 05:13:43.384837 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-18 05:13:43.384866 | orchestrator | Wednesday 18 March 2026 05:13:37 +0000 (0:00:00.182) 0:30:08.802 ******* 
2026-03-18 05:13:43.384878 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.384888 | orchestrator | 2026-03-18 05:13:43.384899 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-18 05:13:43.384910 | orchestrator | Wednesday 18 March 2026 05:13:37 +0000 (0:00:00.141) 0:30:08.944 ******* 2026-03-18 05:13:43.384930 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-18 05:13:43.384941 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-18 05:13:43.385058 | orchestrator | 2026-03-18 05:13:43.385071 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-18 05:13:43.385082 | orchestrator | Wednesday 18 March 2026 05:13:38 +0000 (0:00:00.824) 0:30:09.768 ******* 2026-03-18 05:13:43.385093 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:43.385104 | orchestrator | 2026-03-18 05:13:43.385115 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-18 05:13:43.385125 | orchestrator | Wednesday 18 March 2026 05:13:38 +0000 (0:00:00.490) 0:30:10.258 ******* 2026-03-18 05:13:43.385136 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.385147 | orchestrator | 2026-03-18 05:13:43.385158 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-18 05:13:43.385169 | orchestrator | Wednesday 18 March 2026 05:13:38 +0000 (0:00:00.162) 0:30:10.420 ******* 2026-03-18 05:13:43.385180 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.385190 | orchestrator | 2026-03-18 05:13:43.385201 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-18 05:13:43.385212 | orchestrator | Wednesday 18 March 2026 05:13:38 +0000 (0:00:00.152) 0:30:10.573 ******* 2026-03-18 05:13:43.385223 | orchestrator | 
skipping: [testbed-node-5] 2026-03-18 05:13:43.385234 | orchestrator | 2026-03-18 05:13:43.385244 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-18 05:13:43.385255 | orchestrator | Wednesday 18 March 2026 05:13:39 +0000 (0:00:00.143) 0:30:10.716 ******* 2026-03-18 05:13:43.385266 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5 2026-03-18 05:13:43.385276 | orchestrator | 2026-03-18 05:13:43.385287 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-18 05:13:43.385298 | orchestrator | Wednesday 18 March 2026 05:13:39 +0000 (0:00:00.557) 0:30:11.274 ******* 2026-03-18 05:13:43.385309 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:43.385319 | orchestrator | 2026-03-18 05:13:43.385331 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-18 05:13:43.385341 | orchestrator | Wednesday 18 March 2026 05:13:40 +0000 (0:00:00.687) 0:30:11.962 ******* 2026-03-18 05:13:43.385352 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-18 05:13:43.385363 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-18 05:13:43.385374 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-18 05:13:43.385391 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.385403 | orchestrator | 2026-03-18 05:13:43.385413 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-18 05:13:43.385424 | orchestrator | Wednesday 18 March 2026 05:13:40 +0000 (0:00:00.161) 0:30:12.124 ******* 2026-03-18 05:13:43.385435 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.385446 | orchestrator | 2026-03-18 05:13:43.385457 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-03-18 05:13:43.385468 | orchestrator | Wednesday 18 March 2026 05:13:40 +0000 (0:00:00.128) 0:30:12.252 ******* 2026-03-18 05:13:43.385479 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.385489 | orchestrator | 2026-03-18 05:13:43.385500 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-18 05:13:43.385511 | orchestrator | Wednesday 18 March 2026 05:13:40 +0000 (0:00:00.190) 0:30:12.443 ******* 2026-03-18 05:13:43.385522 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.385533 | orchestrator | 2026-03-18 05:13:43.385544 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-18 05:13:43.385555 | orchestrator | Wednesday 18 March 2026 05:13:40 +0000 (0:00:00.164) 0:30:12.608 ******* 2026-03-18 05:13:43.385574 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.385585 | orchestrator | 2026-03-18 05:13:43.385596 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-18 05:13:43.385606 | orchestrator | Wednesday 18 March 2026 05:13:41 +0000 (0:00:00.168) 0:30:12.777 ******* 2026-03-18 05:13:43.385617 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.385628 | orchestrator | 2026-03-18 05:13:43.385639 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-18 05:13:43.385649 | orchestrator | Wednesday 18 March 2026 05:13:41 +0000 (0:00:00.159) 0:30:12.936 ******* 2026-03-18 05:13:43.385660 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:13:43.385671 | orchestrator | 2026-03-18 05:13:43.385682 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-18 05:13:43.385692 | orchestrator | Wednesday 18 March 2026 05:13:42 +0000 (0:00:01.520) 0:30:14.457 ******* 2026-03-18 05:13:43.385703 | orchestrator | ok: 
[testbed-node-5] 2026-03-18 05:13:43.385714 | orchestrator | 2026-03-18 05:13:43.385725 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-18 05:13:43.385736 | orchestrator | Wednesday 18 March 2026 05:13:42 +0000 (0:00:00.147) 0:30:14.605 ******* 2026-03-18 05:13:43.385746 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 2026-03-18 05:13:43.385757 | orchestrator | 2026-03-18 05:13:43.385768 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-18 05:13:43.385779 | orchestrator | Wednesday 18 March 2026 05:13:43 +0000 (0:00:00.227) 0:30:14.832 ******* 2026-03-18 05:13:43.385789 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:13:43.385800 | orchestrator | 2026-03-18 05:13:43.385811 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-18 05:13:43.385831 | orchestrator | Wednesday 18 March 2026 05:13:43 +0000 (0:00:00.154) 0:30:14.987 ******* 2026-03-18 05:14:03.932362 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.932534 | orchestrator | 2026-03-18 05:14:03.932555 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-18 05:14:03.932568 | orchestrator | Wednesday 18 March 2026 05:13:43 +0000 (0:00:00.443) 0:30:15.430 ******* 2026-03-18 05:14:03.932580 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.932591 | orchestrator | 2026-03-18 05:14:03.932603 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-18 05:14:03.932614 | orchestrator | Wednesday 18 March 2026 05:13:43 +0000 (0:00:00.150) 0:30:15.581 ******* 2026-03-18 05:14:03.932625 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.932636 | orchestrator | 2026-03-18 05:14:03.932647 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-03-18 05:14:03.932658 | orchestrator | Wednesday 18 March 2026 05:13:44 +0000 (0:00:00.164) 0:30:15.746 ******* 2026-03-18 05:14:03.932669 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.932680 | orchestrator | 2026-03-18 05:14:03.932691 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-18 05:14:03.932702 | orchestrator | Wednesday 18 March 2026 05:13:44 +0000 (0:00:00.165) 0:30:15.912 ******* 2026-03-18 05:14:03.932713 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.932724 | orchestrator | 2026-03-18 05:14:03.932735 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-18 05:14:03.932746 | orchestrator | Wednesday 18 March 2026 05:13:44 +0000 (0:00:00.168) 0:30:16.080 ******* 2026-03-18 05:14:03.932757 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.932768 | orchestrator | 2026-03-18 05:14:03.932779 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-18 05:14:03.932790 | orchestrator | Wednesday 18 March 2026 05:13:44 +0000 (0:00:00.158) 0:30:16.238 ******* 2026-03-18 05:14:03.932801 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.932812 | orchestrator | 2026-03-18 05:14:03.932825 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-18 05:14:03.932842 | orchestrator | Wednesday 18 March 2026 05:13:44 +0000 (0:00:00.180) 0:30:16.419 ******* 2026-03-18 05:14:03.932933 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:14:03.932955 | orchestrator | 2026-03-18 05:14:03.932976 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-18 05:14:03.932995 | orchestrator | Wednesday 18 March 2026 05:13:45 +0000 (0:00:00.242) 0:30:16.661 ******* 2026-03-18 05:14:03.933011 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-03-18 05:14:03.933024 | orchestrator | 2026-03-18 05:14:03.933037 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-18 05:14:03.933050 | orchestrator | Wednesday 18 March 2026 05:13:45 +0000 (0:00:00.221) 0:30:16.882 ******* 2026-03-18 05:14:03.933062 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph) 2026-03-18 05:14:03.933080 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-18 05:14:03.933101 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-18 05:14:03.933136 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-18 05:14:03.933150 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-18 05:14:03.933162 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-18 05:14:03.933175 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-18 05:14:03.933187 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-18 05:14:03.933200 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-18 05:14:03.933213 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-18 05:14:03.933226 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-18 05:14:03.933236 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-18 05:14:03.933247 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-18 05:14:03.933258 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-18 05:14:03.933269 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-03-18 05:14:03.933280 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-03-18 05:14:03.933291 | orchestrator | 2026-03-18 05:14:03.933302 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-18 05:14:03.933313 | orchestrator | Wednesday 18 March 2026 05:13:50 +0000 (0:00:05.470) 0:30:22.353 ******* 2026-03-18 05:14:03.933324 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-03-18 05:14:03.933334 | orchestrator | 2026-03-18 05:14:03.933345 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-18 05:14:03.933355 | orchestrator | Wednesday 18 March 2026 05:13:51 +0000 (0:00:00.526) 0:30:22.880 ******* 2026-03-18 05:14:03.933366 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-18 05:14:03.933379 | orchestrator | 2026-03-18 05:14:03.933390 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-18 05:14:03.933400 | orchestrator | Wednesday 18 March 2026 05:13:51 +0000 (0:00:00.499) 0:30:23.379 ******* 2026-03-18 05:14:03.933411 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-18 05:14:03.933422 | orchestrator | 2026-03-18 05:14:03.933433 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-18 05:14:03.933443 | orchestrator | Wednesday 18 March 2026 05:13:52 +0000 (0:00:00.969) 0:30:24.349 ******* 2026-03-18 05:14:03.933454 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.933465 | orchestrator | 2026-03-18 05:14:03.933475 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-18 05:14:03.933505 | orchestrator | Wednesday 18 March 2026 05:13:52 +0000 (0:00:00.147) 0:30:24.496 ******* 2026-03-18 05:14:03.933516 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.933527 | 
orchestrator | 2026-03-18 05:14:03.933547 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-18 05:14:03.933558 | orchestrator | Wednesday 18 March 2026 05:13:53 +0000 (0:00:00.170) 0:30:24.667 ******* 2026-03-18 05:14:03.933569 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.933579 | orchestrator | 2026-03-18 05:14:03.933590 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-18 05:14:03.933601 | orchestrator | Wednesday 18 March 2026 05:13:53 +0000 (0:00:00.142) 0:30:24.809 ******* 2026-03-18 05:14:03.933611 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.933622 | orchestrator | 2026-03-18 05:14:03.933633 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-18 05:14:03.933643 | orchestrator | Wednesday 18 March 2026 05:13:53 +0000 (0:00:00.136) 0:30:24.945 ******* 2026-03-18 05:14:03.933654 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.933665 | orchestrator | 2026-03-18 05:14:03.933675 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-18 05:14:03.933687 | orchestrator | Wednesday 18 March 2026 05:13:53 +0000 (0:00:00.156) 0:30:25.101 ******* 2026-03-18 05:14:03.933697 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.933710 | orchestrator | 2026-03-18 05:14:03.933728 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-18 05:14:03.933745 | orchestrator | Wednesday 18 March 2026 05:13:53 +0000 (0:00:00.140) 0:30:25.242 ******* 2026-03-18 05:14:03.933763 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.933782 | orchestrator | 2026-03-18 05:14:03.933800 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-03-18 05:14:03.933812 | orchestrator | Wednesday 18 March 2026 05:13:53 +0000 (0:00:00.143) 0:30:25.386 ******* 2026-03-18 05:14:03.933829 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.933875 | orchestrator | 2026-03-18 05:14:03.933895 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-18 05:14:03.933913 | orchestrator | Wednesday 18 March 2026 05:13:53 +0000 (0:00:00.150) 0:30:25.536 ******* 2026-03-18 05:14:03.933930 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.933948 | orchestrator | 2026-03-18 05:14:03.933964 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-18 05:14:03.933983 | orchestrator | Wednesday 18 March 2026 05:13:54 +0000 (0:00:00.170) 0:30:25.706 ******* 2026-03-18 05:14:03.934001 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.934086 | orchestrator | 2026-03-18 05:14:03.934102 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-18 05:14:03.934113 | orchestrator | Wednesday 18 March 2026 05:13:54 +0000 (0:00:00.132) 0:30:25.839 ******* 2026-03-18 05:14:03.934124 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.934135 | orchestrator | 2026-03-18 05:14:03.934146 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-18 05:14:03.934165 | orchestrator | Wednesday 18 March 2026 05:13:54 +0000 (0:00:00.188) 0:30:26.028 ******* 2026-03-18 05:14:03.934191 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-03-18 05:14:03.934935 | orchestrator | 2026-03-18 05:14:03.934957 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-18 05:14:03.934968 | orchestrator | Wednesday 18 March 2026 05:13:59 +0000 (0:00:05.016) 0:30:31.044 ******* 2026-03-18 05:14:03.934980 | orchestrator | 
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-18 05:14:03.934992 | orchestrator | 2026-03-18 05:14:03.935003 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-18 05:14:03.935014 | orchestrator | Wednesday 18 March 2026 05:13:59 +0000 (0:00:00.177) 0:30:31.221 ******* 2026-03-18 05:14:03.935029 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-03-18 05:14:03.935054 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-03-18 05:14:03.935067 | orchestrator | 2026-03-18 05:14:03.935079 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-18 05:14:03.935090 | orchestrator | Wednesday 18 March 2026 05:14:03 +0000 (0:00:03.864) 0:30:35.086 ******* 2026-03-18 05:14:03.935101 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.935112 | orchestrator | 2026-03-18 05:14:03.935122 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-18 05:14:03.935134 | orchestrator | Wednesday 18 March 2026 05:14:03 +0000 (0:00:00.160) 0:30:35.247 ******* 2026-03-18 05:14:03.935144 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.935156 | orchestrator | 2026-03-18 05:14:03.935167 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-18 05:14:03.935178 | orchestrator | Wednesday 18 March 2026 05:14:03 +0000 (0:00:00.139) 0:30:35.387 ******* 2026-03-18 05:14:03.935188 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:03.935197 | orchestrator | 2026-03-18 05:14:03.935207 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-18 05:14:03.935230 | orchestrator | Wednesday 18 March 2026 05:14:03 +0000 (0:00:00.148) 0:30:35.535 ******* 2026-03-18 05:14:53.103849 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:53.103973 | orchestrator | 2026-03-18 05:14:53.103991 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-18 05:14:53.104005 | orchestrator | Wednesday 18 March 2026 05:14:04 +0000 (0:00:00.166) 0:30:35.702 ******* 2026-03-18 05:14:53.104016 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:53.104027 | orchestrator | 2026-03-18 05:14:53.104039 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-18 05:14:53.104049 | orchestrator | Wednesday 18 March 2026 05:14:04 +0000 (0:00:00.170) 0:30:35.872 ******* 2026-03-18 05:14:53.104060 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:14:53.104074 | orchestrator | 2026-03-18 05:14:53.104093 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-18 05:14:53.104112 | orchestrator | Wednesday 18 March 2026 05:14:04 +0000 (0:00:00.255) 0:30:36.128 ******* 2026-03-18 05:14:53.104145 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-18 05:14:53.104165 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-18 05:14:53.104184 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-18 05:14:53.104201 | orchestrator | skipping: 
[testbed-node-5] 2026-03-18 05:14:53.104219 | orchestrator | 2026-03-18 05:14:53.104237 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-18 05:14:53.104253 | orchestrator | Wednesday 18 March 2026 05:14:04 +0000 (0:00:00.462) 0:30:36.590 ******* 2026-03-18 05:14:53.104270 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-18 05:14:53.104287 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-18 05:14:53.104305 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-18 05:14:53.104323 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:53.104341 | orchestrator | 2026-03-18 05:14:53.104359 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-18 05:14:53.104382 | orchestrator | Wednesday 18 March 2026 05:14:05 +0000 (0:00:00.552) 0:30:37.143 ******* 2026-03-18 05:14:53.104405 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-18 05:14:53.104424 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-18 05:14:53.104479 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-18 05:14:53.104511 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:53.104541 | orchestrator | 2026-03-18 05:14:53.104561 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-18 05:14:53.104581 | orchestrator | Wednesday 18 March 2026 05:14:06 +0000 (0:00:00.851) 0:30:37.994 ******* 2026-03-18 05:14:53.104600 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:14:53.104619 | orchestrator | 2026-03-18 05:14:53.104668 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-18 05:14:53.104688 | orchestrator | Wednesday 18 March 2026 05:14:06 +0000 (0:00:00.170) 0:30:38.164 ******* 2026-03-18 05:14:53.104726 | orchestrator | ok: 
[testbed-node-5] => (item=0) 2026-03-18 05:14:53.104738 | orchestrator | 2026-03-18 05:14:53.104749 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-18 05:14:53.104760 | orchestrator | Wednesday 18 March 2026 05:14:07 +0000 (0:00:01.159) 0:30:39.324 ******* 2026-03-18 05:14:53.104771 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:14:53.104781 | orchestrator | 2026-03-18 05:14:53.104792 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-18 05:14:53.104803 | orchestrator | Wednesday 18 March 2026 05:14:08 +0000 (0:00:00.847) 0:30:40.171 ******* 2026-03-18 05:14:53.104814 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5 2026-03-18 05:14:53.104824 | orchestrator | 2026-03-18 05:14:53.104835 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-18 05:14:53.104846 | orchestrator | Wednesday 18 March 2026 05:14:08 +0000 (0:00:00.202) 0:30:40.374 ******* 2026-03-18 05:14:53.104856 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 05:14:53.104867 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-18 05:14:53.104879 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-18 05:14:53.104889 | orchestrator | 2026-03-18 05:14:53.104900 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-18 05:14:53.104911 | orchestrator | Wednesday 18 March 2026 05:14:11 +0000 (0:00:02.253) 0:30:42.628 ******* 2026-03-18 05:14:53.104922 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-18 05:14:53.104933 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-18 05:14:53.104943 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:14:53.104954 | orchestrator | 2026-03-18 05:14:53.104965 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-03-18 05:14:53.104975 | orchestrator | Wednesday 18 March 2026 05:14:11 +0000 (0:00:00.972) 0:30:43.600 ******* 2026-03-18 05:14:53.104986 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:53.104997 | orchestrator | 2026-03-18 05:14:53.105007 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-18 05:14:53.105018 | orchestrator | Wednesday 18 March 2026 05:14:12 +0000 (0:00:00.142) 0:30:43.742 ******* 2026-03-18 05:14:53.105029 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-03-18 05:14:53.105040 | orchestrator | 2026-03-18 05:14:53.105051 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-18 05:14:53.105061 | orchestrator | Wednesday 18 March 2026 05:14:12 +0000 (0:00:00.210) 0:30:43.953 ******* 2026-03-18 05:14:53.105073 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-18 05:14:53.105086 | orchestrator | 2026-03-18 05:14:53.105096 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-18 05:14:53.105107 | orchestrator | Wednesday 18 March 2026 05:14:12 +0000 (0:00:00.650) 0:30:44.603 ******* 2026-03-18 05:14:53.105138 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 05:14:53.105151 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-18 05:14:53.105174 | orchestrator | 2026-03-18 05:14:53.105185 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-18 05:14:53.105196 | orchestrator | Wednesday 18 March 2026 05:14:17 +0000 (0:00:04.083) 0:30:48.686 ******* 
2026-03-18 05:14:53.105206 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-18 05:14:53.105217 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-18 05:14:53.105228 | orchestrator | 2026-03-18 05:14:53.105239 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-18 05:14:53.105249 | orchestrator | Wednesday 18 March 2026 05:14:19 +0000 (0:00:02.866) 0:30:51.553 ******* 2026-03-18 05:14:53.105261 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-18 05:14:53.105272 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:14:53.105283 | orchestrator | 2026-03-18 05:14:53.105294 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-18 05:14:53.105305 | orchestrator | Wednesday 18 March 2026 05:14:20 +0000 (0:00:01.015) 0:30:52.568 ******* 2026-03-18 05:14:53.105316 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-03-18 05:14:53.105326 | orchestrator | 2026-03-18 05:14:53.105337 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-18 05:14:53.105348 | orchestrator | Wednesday 18 March 2026 05:14:21 +0000 (0:00:00.238) 0:30:52.807 ******* 2026-03-18 05:14:53.105359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:14:53.105370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:14:53.105381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:14:53.105392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-18 05:14:53.105403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:14:53.105414 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:53.105424 | orchestrator | 2026-03-18 05:14:53.105435 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-18 05:14:53.105452 | orchestrator | Wednesday 18 March 2026 05:14:21 +0000 (0:00:00.621) 0:30:53.428 ******* 2026-03-18 05:14:53.105463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:14:53.105474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:14:53.105485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:14:53.105495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:14:53.105506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-18 05:14:53.105517 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:53.105528 | orchestrator | 2026-03-18 05:14:53.105538 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-18 05:14:53.105549 | orchestrator | Wednesday 18 March 2026 05:14:22 +0000 (0:00:00.616) 0:30:54.044 ******* 2026-03-18 05:14:53.105560 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-18 05:14:53.105571 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-18 05:14:53.105589 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-18 05:14:53.105600 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-18 05:14:53.105612 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-18 05:14:53.105623 | orchestrator | 2026-03-18 05:14:53.105660 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-18 05:14:53.105677 | orchestrator | Wednesday 18 March 2026 05:14:52 +0000 (0:00:30.521) 0:31:24.565 ******* 2026-03-18 05:14:53.105688 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:14:53.105699 | orchestrator | 2026-03-18 05:14:53.105709 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-18 05:14:53.105728 | orchestrator | Wednesday 18 March 2026 05:14:53 +0000 (0:00:00.145) 0:31:24.711 ******* 2026-03-18 05:15:21.458341 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:15:21.458460 | orchestrator | 2026-03-18 05:15:21.458477 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-18 05:15:21.458491 | orchestrator | Wednesday 18 March 2026 05:14:53 +0000 (0:00:00.139) 0:31:24.851 ******* 2026-03-18 05:15:21.458502 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-03-18 05:15:21.458514 | orchestrator | 2026-03-18 05:15:21.458591 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-18 05:15:21.458602 | orchestrator | Wednesday 18 March 2026 05:14:53 +0000 (0:00:00.236) 0:31:25.088 ******* 2026-03-18 05:15:21.458614 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-03-18 05:15:21.458624 | orchestrator | 2026-03-18 05:15:21.458635 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-18 05:15:21.458646 | orchestrator | Wednesday 18 March 2026 05:14:53 +0000 (0:00:00.227) 0:31:25.315 ******* 2026-03-18 05:15:21.458657 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:21.458670 | orchestrator | 2026-03-18 05:15:21.458681 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-18 05:15:21.458692 | orchestrator | Wednesday 18 March 2026 05:14:55 +0000 (0:00:01.356) 0:31:26.671 ******* 2026-03-18 05:15:21.458703 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:21.458714 | orchestrator | 2026-03-18 05:15:21.458724 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-18 05:15:21.458735 | orchestrator | Wednesday 18 March 2026 05:14:56 +0000 (0:00:00.961) 0:31:27.633 ******* 2026-03-18 05:15:21.458746 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:21.458757 | orchestrator | 2026-03-18 05:15:21.458767 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-18 05:15:21.458778 | orchestrator | Wednesday 18 March 2026 05:14:57 +0000 (0:00:01.289) 0:31:28.923 ******* 2026-03-18 05:15:21.458790 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-18 05:15:21.458803 | orchestrator | 2026-03-18 05:15:21.458814 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-03-18 05:15:21.458825 | 
orchestrator | skipping: no hosts matched 2026-03-18 05:15:21.458836 | orchestrator | 2026-03-18 05:15:21.458847 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-03-18 05:15:21.458857 | orchestrator | skipping: no hosts matched 2026-03-18 05:15:21.458868 | orchestrator | 2026-03-18 05:15:21.458880 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-03-18 05:15:21.458893 | orchestrator | skipping: no hosts matched 2026-03-18 05:15:21.458930 | orchestrator | 2026-03-18 05:15:21.458943 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-03-18 05:15:21.458956 | orchestrator | 2026-03-18 05:15:21.458968 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-03-18 05:15:21.458994 | orchestrator | Wednesday 18 March 2026 05:15:00 +0000 (0:00:03.514) 0:31:32.437 ******* 2026-03-18 05:15:21.459007 | orchestrator | changed: [testbed-node-0] 2026-03-18 05:15:21.459020 | orchestrator | changed: [testbed-node-1] 2026-03-18 05:15:21.459032 | orchestrator | changed: [testbed-node-2] 2026-03-18 05:15:21.459045 | orchestrator | changed: [testbed-node-3] 2026-03-18 05:15:21.459057 | orchestrator | changed: [testbed-node-4] 2026-03-18 05:15:21.459069 | orchestrator | changed: [testbed-node-5] 2026-03-18 05:15:21.459082 | orchestrator | 2026-03-18 05:15:21.459094 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-03-18 05:15:21.459107 | orchestrator | Wednesday 18 March 2026 05:15:02 +0000 (0:00:01.487) 0:31:33.925 ******* 2026-03-18 05:15:21.459120 | orchestrator | changed: [testbed-node-0] 2026-03-18 05:15:21.459133 | orchestrator | changed: [testbed-node-1] 2026-03-18 05:15:21.459145 | orchestrator | changed: [testbed-node-2] 2026-03-18 05:15:21.459158 | orchestrator | changed: [testbed-node-3] 2026-03-18 05:15:21.459170 | 
orchestrator | changed: [testbed-node-4] 2026-03-18 05:15:21.459182 | orchestrator | changed: [testbed-node-5] 2026-03-18 05:15:21.459195 | orchestrator | 2026-03-18 05:15:21.459207 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 05:15:21.459219 | orchestrator | Wednesday 18 March 2026 05:15:04 +0000 (0:00:02.580) 0:31:36.506 ******* 2026-03-18 05:15:21.459231 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:21.459244 | orchestrator | ok: [testbed-node-1] 2026-03-18 05:15:21.459257 | orchestrator | ok: [testbed-node-2] 2026-03-18 05:15:21.459269 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:15:21.459280 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:15:21.459290 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:21.459301 | orchestrator | 2026-03-18 05:15:21.459312 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 05:15:21.459322 | orchestrator | Wednesday 18 March 2026 05:15:06 +0000 (0:00:01.238) 0:31:37.744 ******* 2026-03-18 05:15:21.459333 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:21.459343 | orchestrator | ok: [testbed-node-1] 2026-03-18 05:15:21.459354 | orchestrator | ok: [testbed-node-2] 2026-03-18 05:15:21.459364 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:15:21.459375 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:15:21.459385 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:21.459396 | orchestrator | 2026-03-18 05:15:21.459407 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-18 05:15:21.459418 | orchestrator | Wednesday 18 March 2026 05:15:07 +0000 (0:00:01.162) 0:31:38.907 ******* 2026-03-18 05:15:21.459430 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 05:15:21.459442 | 
orchestrator | 2026-03-18 05:15:21.459453 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-18 05:15:21.459463 | orchestrator | Wednesday 18 March 2026 05:15:08 +0000 (0:00:00.965) 0:31:39.873 ******* 2026-03-18 05:15:21.459474 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 05:15:21.459485 | orchestrator | 2026-03-18 05:15:21.459541 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-18 05:15:21.459566 | orchestrator | Wednesday 18 March 2026 05:15:09 +0000 (0:00:01.176) 0:31:41.050 ******* 2026-03-18 05:15:21.459584 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:21.459602 | orchestrator | ok: [testbed-node-1] 2026-03-18 05:15:21.459620 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:15:21.459637 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:15:21.459654 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:15:21.459683 | orchestrator | ok: [testbed-node-2] 2026-03-18 05:15:21.459701 | orchestrator | 2026-03-18 05:15:21.459719 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-18 05:15:21.459738 | orchestrator | Wednesday 18 March 2026 05:15:11 +0000 (0:00:01.951) 0:31:43.001 ******* 2026-03-18 05:15:21.459756 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:21.459774 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:15:21.459792 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:15:21.459808 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:15:21.459825 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:15:21.459843 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:21.459861 | orchestrator | 2026-03-18 05:15:21.459880 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-03-18 05:15:21.459899 | orchestrator | Wednesday 18 March 2026 05:15:12 +0000 (0:00:01.024) 0:31:44.026 ******* 2026-03-18 05:15:21.459918 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:21.459937 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:15:21.459956 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:15:21.459974 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:15:21.459993 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:15:21.460011 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:21.460030 | orchestrator | 2026-03-18 05:15:21.460049 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-18 05:15:21.460068 | orchestrator | Wednesday 18 March 2026 05:15:13 +0000 (0:00:01.354) 0:31:45.381 ******* 2026-03-18 05:15:21.460087 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:21.460103 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:15:21.460114 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:15:21.460124 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:15:21.460135 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:15:21.460146 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:21.460156 | orchestrator | 2026-03-18 05:15:21.460167 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-18 05:15:21.460178 | orchestrator | Wednesday 18 March 2026 05:15:14 +0000 (0:00:01.088) 0:31:46.469 ******* 2026-03-18 05:15:21.460189 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:15:21.460200 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:21.460210 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:15:21.460221 | orchestrator | ok: [testbed-node-1] 2026-03-18 05:15:21.460232 | orchestrator | ok: [testbed-node-2] 2026-03-18 05:15:21.460242 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:15:21.460253 | orchestrator | 
2026-03-18 05:15:21.460264 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-18 05:15:21.460275 | orchestrator | Wednesday 18 March 2026 05:15:15 +0000 (0:00:01.068) 0:31:47.537 ******* 2026-03-18 05:15:21.460295 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:21.460306 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:15:21.460317 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:15:21.460328 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:15:21.460339 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:15:21.460350 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:15:21.460360 | orchestrator | 2026-03-18 05:15:21.460371 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-18 05:15:21.460382 | orchestrator | Wednesday 18 March 2026 05:15:16 +0000 (0:00:00.676) 0:31:48.214 ******* 2026-03-18 05:15:21.460393 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:21.460403 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:15:21.460414 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:15:21.460425 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:15:21.460435 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:15:21.460446 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:15:21.460456 | orchestrator | 2026-03-18 05:15:21.460467 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-18 05:15:21.460488 | orchestrator | Wednesday 18 March 2026 05:15:17 +0000 (0:00:00.933) 0:31:49.148 ******* 2026-03-18 05:15:21.460499 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:21.460511 | orchestrator | ok: [testbed-node-1] 2026-03-18 05:15:21.460547 | orchestrator | ok: [testbed-node-2] 2026-03-18 05:15:21.460560 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:15:21.460572 | orchestrator | ok: [testbed-node-4] 
2026-03-18 05:15:21.460584 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:21.460597 | orchestrator | 2026-03-18 05:15:21.460610 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-18 05:15:21.460622 | orchestrator | Wednesday 18 March 2026 05:15:18 +0000 (0:00:01.058) 0:31:50.206 ******* 2026-03-18 05:15:21.460634 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:21.460646 | orchestrator | ok: [testbed-node-1] 2026-03-18 05:15:21.460659 | orchestrator | ok: [testbed-node-2] 2026-03-18 05:15:21.460671 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:15:21.460683 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:15:21.460695 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:21.460707 | orchestrator | 2026-03-18 05:15:21.460720 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-18 05:15:21.460733 | orchestrator | Wednesday 18 March 2026 05:15:19 +0000 (0:00:01.144) 0:31:51.351 ******* 2026-03-18 05:15:21.460745 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:21.460758 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:15:21.460770 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:15:21.460782 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:15:21.460794 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:15:21.460806 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:15:21.460818 | orchestrator | 2026-03-18 05:15:21.460830 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-18 05:15:21.460843 | orchestrator | Wednesday 18 March 2026 05:15:20 +0000 (0:00:01.045) 0:31:52.396 ******* 2026-03-18 05:15:21.460855 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:21.460868 | orchestrator | ok: [testbed-node-1] 2026-03-18 05:15:21.460878 | orchestrator | ok: [testbed-node-2] 2026-03-18 05:15:21.460889 | orchestrator | skipping: 
[testbed-node-3] 2026-03-18 05:15:21.460900 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:15:21.460913 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:15:21.460933 | orchestrator | 2026-03-18 05:15:21.460968 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-18 05:15:54.736645 | orchestrator | Wednesday 18 March 2026 05:15:21 +0000 (0:00:00.667) 0:31:53.064 ******* 2026-03-18 05:15:54.736764 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:54.736783 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:15:54.736802 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:15:54.736821 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:15:54.736840 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:15:54.736858 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:54.736877 | orchestrator | 2026-03-18 05:15:54.736897 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-18 05:15:54.736915 | orchestrator | Wednesday 18 March 2026 05:15:22 +0000 (0:00:00.975) 0:31:54.039 ******* 2026-03-18 05:15:54.736935 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:54.736954 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:15:54.736973 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:15:54.736986 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:15:54.736998 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:15:54.737009 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:54.737020 | orchestrator | 2026-03-18 05:15:54.737031 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-18 05:15:54.737042 | orchestrator | Wednesday 18 March 2026 05:15:23 +0000 (0:00:00.769) 0:31:54.809 ******* 2026-03-18 05:15:54.737053 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:54.737065 | orchestrator | skipping: [testbed-node-1] 2026-03-18 
05:15:54.737075 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:15:54.737113 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:15:54.737125 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:15:54.737136 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:54.737146 | orchestrator | 2026-03-18 05:15:54.737159 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-18 05:15:54.737172 | orchestrator | Wednesday 18 March 2026 05:15:24 +0000 (0:00:00.977) 0:31:55.787 ******* 2026-03-18 05:15:54.737184 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:54.737197 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:15:54.737210 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:15:54.737222 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:15:54.737235 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:15:54.737247 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:15:54.737260 | orchestrator | 2026-03-18 05:15:54.737272 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-18 05:15:54.737285 | orchestrator | Wednesday 18 March 2026 05:15:24 +0000 (0:00:00.669) 0:31:56.456 ******* 2026-03-18 05:15:54.737297 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:54.737309 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:15:54.737321 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:15:54.737333 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:15:54.737346 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:15:54.737358 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:15:54.737370 | orchestrator | 2026-03-18 05:15:54.737382 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-18 05:15:54.737446 | orchestrator | Wednesday 18 March 2026 05:15:25 +0000 (0:00:00.983) 0:31:57.439 ******* 2026-03-18 05:15:54.737460 | 
orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:54.737473 | orchestrator | ok: [testbed-node-1] 2026-03-18 05:15:54.737485 | orchestrator | ok: [testbed-node-2] 2026-03-18 05:15:54.737497 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:15:54.737510 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:15:54.737521 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:15:54.737532 | orchestrator | 2026-03-18 05:15:54.737543 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-18 05:15:54.737553 | orchestrator | Wednesday 18 March 2026 05:15:26 +0000 (0:00:00.661) 0:31:58.101 ******* 2026-03-18 05:15:54.737564 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:54.737575 | orchestrator | ok: [testbed-node-1] 2026-03-18 05:15:54.737586 | orchestrator | ok: [testbed-node-2] 2026-03-18 05:15:54.737596 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:15:54.737607 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:15:54.737618 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:54.737628 | orchestrator | 2026-03-18 05:15:54.737639 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-18 05:15:54.737650 | orchestrator | Wednesday 18 March 2026 05:15:27 +0000 (0:00:00.988) 0:31:59.090 ******* 2026-03-18 05:15:54.737660 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:54.737671 | orchestrator | ok: [testbed-node-1] 2026-03-18 05:15:54.737682 | orchestrator | ok: [testbed-node-2] 2026-03-18 05:15:54.737692 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:15:54.737703 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:15:54.737714 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:54.737724 | orchestrator | 2026-03-18 05:15:54.737735 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-18 05:15:54.737746 | orchestrator | Wednesday 18 March 2026 05:15:28 +0000 (0:00:01.219) 
0:32:00.309 ******* 2026-03-18 05:15:54.737757 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:54.737767 | orchestrator | 2026-03-18 05:15:54.737778 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-18 05:15:54.737789 | orchestrator | Wednesday 18 March 2026 05:15:30 +0000 (0:00:02.125) 0:32:02.435 ******* 2026-03-18 05:15:54.737799 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:54.737810 | orchestrator | 2026-03-18 05:15:54.737821 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-18 05:15:54.737840 | orchestrator | Wednesday 18 March 2026 05:15:33 +0000 (0:00:02.666) 0:32:05.101 ******* 2026-03-18 05:15:54.737851 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:54.737862 | orchestrator | ok: [testbed-node-1] 2026-03-18 05:15:54.737872 | orchestrator | ok: [testbed-node-2] 2026-03-18 05:15:54.737883 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:15:54.737894 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:15:54.737905 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:54.737915 | orchestrator | 2026-03-18 05:15:54.737926 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-18 05:15:54.737937 | orchestrator | Wednesday 18 March 2026 05:15:35 +0000 (0:00:01.664) 0:32:06.766 ******* 2026-03-18 05:15:54.737948 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:54.737958 | orchestrator | ok: [testbed-node-1] 2026-03-18 05:15:54.737969 | orchestrator | ok: [testbed-node-2] 2026-03-18 05:15:54.737979 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:15:54.737990 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:15:54.738000 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:54.738011 | orchestrator | 2026-03-18 05:15:54.738063 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-03-18 05:15:54.738096 | orchestrator 
| Wednesday 18 March 2026 05:15:36 +0000 (0:00:01.054) 0:32:07.820 ******* 2026-03-18 05:15:54.738109 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-18 05:15:54.738121 | orchestrator | 2026-03-18 05:15:54.738132 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-18 05:15:54.738144 | orchestrator | Wednesday 18 March 2026 05:15:38 +0000 (0:00:01.833) 0:32:09.654 ******* 2026-03-18 05:15:54.738154 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:54.738165 | orchestrator | ok: [testbed-node-1] 2026-03-18 05:15:54.738176 | orchestrator | ok: [testbed-node-2] 2026-03-18 05:15:54.738186 | orchestrator | ok: [testbed-node-3] 2026-03-18 05:15:54.738197 | orchestrator | ok: [testbed-node-4] 2026-03-18 05:15:54.738208 | orchestrator | ok: [testbed-node-5] 2026-03-18 05:15:54.738219 | orchestrator | 2026-03-18 05:15:54.738230 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-18 05:15:54.738241 | orchestrator | Wednesday 18 March 2026 05:15:39 +0000 (0:00:01.887) 0:32:11.541 ******* 2026-03-18 05:15:54.738251 | orchestrator | changed: [testbed-node-1] 2026-03-18 05:15:54.738262 | orchestrator | changed: [testbed-node-0] 2026-03-18 05:15:54.738273 | orchestrator | changed: [testbed-node-3] 2026-03-18 05:15:54.738284 | orchestrator | changed: [testbed-node-4] 2026-03-18 05:15:54.738295 | orchestrator | changed: [testbed-node-5] 2026-03-18 05:15:54.738306 | orchestrator | changed: [testbed-node-2] 2026-03-18 05:15:54.738316 | orchestrator | 2026-03-18 05:15:54.738327 | orchestrator | PLAY [Complete upgrade] ******************************************************** 2026-03-18 05:15:54.738339 | orchestrator | 2026-03-18 05:15:54.738349 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 
2026-03-18 05:15:54.738360 | orchestrator | Wednesday 18 March 2026 05:15:43 +0000 (0:00:03.827) 0:32:15.369 ******* 2026-03-18 05:15:54.738371 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:54.738382 | orchestrator | ok: [testbed-node-1] 2026-03-18 05:15:54.738411 | orchestrator | ok: [testbed-node-2] 2026-03-18 05:15:54.738422 | orchestrator | 2026-03-18 05:15:54.738433 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 05:15:54.738444 | orchestrator | Wednesday 18 March 2026 05:15:44 +0000 (0:00:00.692) 0:32:16.062 ******* 2026-03-18 05:15:54.738454 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:54.738465 | orchestrator | ok: [testbed-node-1] 2026-03-18 05:15:54.738476 | orchestrator | ok: [testbed-node-2] 2026-03-18 05:15:54.738487 | orchestrator | 2026-03-18 05:15:54.738497 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-03-18 05:15:54.738580 | orchestrator | Wednesday 18 March 2026 05:15:45 +0000 (0:00:00.885) 0:32:16.947 ******* 2026-03-18 05:15:54.738595 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:15:54.738617 | orchestrator | 2026-03-18 05:15:54.738628 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-03-18 05:15:54.738646 | orchestrator | Wednesday 18 March 2026 05:15:46 +0000 (0:00:01.317) 0:32:18.264 ******* 2026-03-18 05:15:54.738657 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:54.738668 | orchestrator | 2026-03-18 05:15:54.738679 | orchestrator | PLAY [Upgrade node-exporter] *************************************************** 2026-03-18 05:15:54.738690 | orchestrator | 2026-03-18 05:15:54.738700 | orchestrator | TASK [Stop node-exporter] ****************************************************** 2026-03-18 05:15:54.738711 | orchestrator | Wednesday 18 March 2026 05:15:47 +0000 (0:00:01.238) 0:32:19.502 ******* 2026-03-18 
05:15:54.738722 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:54.738733 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:15:54.738744 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:15:54.738754 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:15:54.738765 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:15:54.738776 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:15:54.738787 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:15:54.738798 | orchestrator | 2026-03-18 05:15:54.738809 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 05:15:54.738819 | orchestrator | Wednesday 18 March 2026 05:15:49 +0000 (0:00:01.139) 0:32:20.642 ******* 2026-03-18 05:15:54.738831 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:54.738841 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:15:54.738852 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:15:54.738863 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:15:54.738874 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:15:54.738884 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:15:54.738895 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:15:54.738906 | orchestrator | 2026-03-18 05:15:54.738917 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-03-18 05:15:54.738928 | orchestrator | Wednesday 18 March 2026 05:15:50 +0000 (0:00:01.643) 0:32:22.285 ******* 2026-03-18 05:15:54.738939 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:54.738949 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:15:54.738960 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:15:54.738971 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:15:54.738981 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:15:54.738992 | orchestrator | skipping: [testbed-node-5] 2026-03-18 
05:15:54.739003 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:15:54.739014 | orchestrator | 2026-03-18 05:15:54.739025 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-03-18 05:15:54.739035 | orchestrator | Wednesday 18 March 2026 05:15:52 +0000 (0:00:01.602) 0:32:23.887 ******* 2026-03-18 05:15:54.739046 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:54.739057 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:15:54.739068 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:15:54.739079 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:15:54.739089 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:15:54.739100 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:15:54.739111 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:15:54.739122 | orchestrator | 2026-03-18 05:15:54.739133 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************ 2026-03-18 05:15:54.739143 | orchestrator | Wednesday 18 March 2026 05:15:53 +0000 (0:00:01.629) 0:32:25.516 ******* 2026-03-18 05:15:54.739154 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:15:54.739165 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:15:54.739176 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:15:54.739196 | orchestrator | skipping: [testbed-node-3] 2026-03-18 05:16:15.291262 | orchestrator | skipping: [testbed-node-4] 2026-03-18 05:16:15.291508 | orchestrator | skipping: [testbed-node-5] 2026-03-18 05:16:15.291539 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:16:15.291584 | orchestrator | 2026-03-18 05:16:15.291598 | orchestrator | PLAY [Upgrade monitoring node] ************************************************* 2026-03-18 05:16:15.291610 | orchestrator | 2026-03-18 05:16:15.291622 | orchestrator | TASK [Stop monitoring services] ************************************************ 2026-03-18 05:16:15.291633 | 
orchestrator | Wednesday 18 March 2026 05:15:56 +0000 (0:00:02.477) 0:32:27.994 ******* 2026-03-18 05:16:15.291645 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)  2026-03-18 05:16:15.291656 | orchestrator | skipping: [testbed-manager] => (item=prometheus)  2026-03-18 05:16:15.291667 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)  2026-03-18 05:16:15.291679 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:16:15.291690 | orchestrator | 2026-03-18 05:16:15.291700 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-03-18 05:16:15.291711 | orchestrator | Wednesday 18 March 2026 05:15:56 +0000 (0:00:00.196) 0:32:28.191 ******* 2026-03-18 05:16:15.291722 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:16:15.291733 | orchestrator | 2026-03-18 05:16:15.291743 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-03-18 05:16:15.291754 | orchestrator | Wednesday 18 March 2026 05:15:56 +0000 (0:00:00.159) 0:32:28.351 ******* 2026-03-18 05:16:15.291765 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:16:15.291776 | orchestrator | 2026-03-18 05:16:15.291786 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-03-18 05:16:15.291797 | orchestrator | Wednesday 18 March 2026 05:15:56 +0000 (0:00:00.175) 0:32:28.526 ******* 2026-03-18 05:16:15.291810 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:16:15.291822 | orchestrator | 2026-03-18 05:16:15.291834 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-03-18 05:16:15.291847 | orchestrator | Wednesday 18 March 2026 05:15:57 +0000 (0:00:00.174) 0:32:28.701 ******* 2026-03-18 05:16:15.291861 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:16:15.291873 | orchestrator | 2026-03-18 05:16:15.291885 | orchestrator | TASK [ceph-prometheus : Create 
prometheus directories] ************************* 2026-03-18 05:16:15.291898 | orchestrator | Wednesday 18 March 2026 05:15:57 +0000 (0:00:00.280) 0:32:28.982 ******* 2026-03-18 05:16:15.291910 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)  2026-03-18 05:16:15.291923 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)  2026-03-18 05:16:15.291936 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:16:15.291948 | orchestrator | 2026-03-18 05:16:15.291960 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] ************************** 2026-03-18 05:16:15.291973 | orchestrator | Wednesday 18 March 2026 05:15:57 +0000 (0:00:00.168) 0:32:29.150 ******* 2026-03-18 05:16:15.292000 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:16:15.292011 | orchestrator | 2026-03-18 05:16:15.292022 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] ********* 2026-03-18 05:16:15.292033 | orchestrator | Wednesday 18 March 2026 05:15:57 +0000 (0:00:00.172) 0:32:29.323 ******* 2026-03-18 05:16:15.292043 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:16:15.292054 | orchestrator | 2026-03-18 05:16:15.292065 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] *********************************** 2026-03-18 05:16:15.292075 | orchestrator | Wednesday 18 March 2026 05:15:57 +0000 (0:00:00.168) 0:32:29.492 ******* 2026-03-18 05:16:15.292086 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:16:15.292097 | orchestrator | 2026-03-18 05:16:15.292107 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] *********************** 2026-03-18 05:16:15.292118 | orchestrator | Wednesday 18 March 2026 05:15:58 +0000 (0:00:00.521) 0:32:30.014 ******* 2026-03-18 05:16:15.292129 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)  2026-03-18 05:16:15.292139 | orchestrator | skipping: [testbed-manager] => 
(item=/var/lib/alertmanager)  2026-03-18 05:16:15.292150 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:16:15.292161 | orchestrator | 2026-03-18 05:16:15.292171 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************ 2026-03-18 05:16:15.292190 | orchestrator | Wednesday 18 March 2026 05:15:58 +0000 (0:00:00.184) 0:32:30.198 ******* 2026-03-18 05:16:15.292201 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:16:15.292211 | orchestrator | 2026-03-18 05:16:15.292222 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] *************************** 2026-03-18 05:16:15.292233 | orchestrator | Wednesday 18 March 2026 05:15:58 +0000 (0:00:00.179) 0:32:30.377 ******* 2026-03-18 05:16:15.292243 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:16:15.292254 | orchestrator | 2026-03-18 05:16:15.292265 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ****************************** 2026-03-18 05:16:15.292275 | orchestrator | Wednesday 18 March 2026 05:15:59 +0000 (0:00:00.266) 0:32:30.644 ******* 2026-03-18 05:16:15.292287 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:16:15.292297 | orchestrator | 2026-03-18 05:16:15.292337 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] **************************** 2026-03-18 05:16:15.292359 | orchestrator | Wednesday 18 March 2026 05:15:59 +0000 (0:00:00.161) 0:32:30.805 ******* 2026-03-18 05:16:15.292377 | orchestrator | skipping: [testbed-manager] 2026-03-18 05:16:15.292395 | orchestrator | 2026-03-18 05:16:15.292407 | orchestrator | PLAY [Upgrade ceph dashboard] ************************************************** 2026-03-18 05:16:15.292417 | orchestrator | 2026-03-18 05:16:15.292428 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-18 05:16:15.292439 | orchestrator | Wednesday 18 March 2026 05:16:00 +0000 (0:00:01.160) 0:32:31.965 ******* 2026-03-18 
05:16:15.292450 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:16:15.292461 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:16:15.292471 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:16:15.292482 | orchestrator | 2026-03-18 05:16:15.292493 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-03-18 05:16:15.292504 | orchestrator | Wednesday 18 March 2026 05:16:00 +0000 (0:00:00.605) 0:32:32.571 ******* 2026-03-18 05:16:15.292514 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:16:15.292525 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:16:15.292555 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:16:15.292567 | orchestrator | 2026-03-18 05:16:15.292578 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-03-18 05:16:15.292589 | orchestrator | Wednesday 18 March 2026 05:16:01 +0000 (0:00:00.353) 0:32:32.925 ******* 2026-03-18 05:16:15.292600 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:16:15.292611 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:16:15.292622 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:16:15.292633 | orchestrator | 2026-03-18 05:16:15.292644 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-03-18 05:16:15.292654 | orchestrator | Wednesday 18 March 2026 05:16:01 +0000 (0:00:00.361) 0:32:33.286 ******* 2026-03-18 05:16:15.292665 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:16:15.292676 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:16:15.292687 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:16:15.292698 | orchestrator | 2026-03-18 05:16:15.292709 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-03-18 05:16:15.292720 | orchestrator | Wednesday 18 March 2026 05:16:02 +0000 (0:00:00.660) 0:32:33.947 ******* 2026-03-18 
05:16:15.292730 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:16:15.292741 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:16:15.292752 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:16:15.292763 | orchestrator | 2026-03-18 05:16:15.292774 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************ 2026-03-18 05:16:15.292784 | orchestrator | Wednesday 18 March 2026 05:16:02 +0000 (0:00:00.621) 0:32:34.569 ******* 2026-03-18 05:16:15.292795 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:16:15.292806 | orchestrator | skipping: [testbed-node-1] 2026-03-18 05:16:15.292817 | orchestrator | skipping: [testbed-node-2] 2026-03-18 05:16:15.292827 | orchestrator | 2026-03-18 05:16:15.292838 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************ 2026-03-18 05:16:15.292856 | orchestrator | Wednesday 18 March 2026 05:16:03 +0000 (0:00:00.343) 0:32:34.912 ******* 2026-03-18 05:16:15.292867 | orchestrator | skipping: [testbed-node-0] 2026-03-18 05:16:15.292878 | orchestrator | 2026-03-18 05:16:15.292889 | orchestrator | PLAY [Switch any existing crush buckets to straw2] ***************************** 2026-03-18 05:16:15.292899 | orchestrator | 2026-03-18 05:16:15.292910 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-18 05:16:15.292921 | orchestrator | Wednesday 18 March 2026 05:16:04 +0000 (0:00:01.180) 0:32:36.092 ******* 2026-03-18 05:16:15.292932 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:16:15.292943 | orchestrator | 2026-03-18 05:16:15.292954 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-18 05:16:15.292964 | orchestrator | Wednesday 18 March 2026 05:16:04 +0000 (0:00:00.514) 0:32:36.607 ******* 2026-03-18 05:16:15.292975 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:16:15.292986 | orchestrator | 2026-03-18 05:16:15.292997 
| orchestrator | TASK [Set_fact ceph_cmd] ******************************************************* 2026-03-18 05:16:15.293013 | orchestrator | Wednesday 18 March 2026 05:16:05 +0000 (0:00:00.263) 0:32:36.870 ******* 2026-03-18 05:16:15.293024 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:16:15.293035 | orchestrator | 2026-03-18 05:16:15.293046 | orchestrator | TASK [Backup the crushmap] ***************************************************** 2026-03-18 05:16:15.293057 | orchestrator | Wednesday 18 March 2026 05:16:05 +0000 (0:00:00.153) 0:32:37.024 ******* 2026-03-18 05:16:15.293067 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:16:15.293078 | orchestrator | 2026-03-18 05:16:15.293089 | orchestrator | TASK [Switch crush buckets to straw2] ****************************************** 2026-03-18 05:16:15.293100 | orchestrator | Wednesday 18 March 2026 05:16:07 +0000 (0:00:01.899) 0:32:38.923 ******* 2026-03-18 05:16:15.293110 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:16:15.293121 | orchestrator | 2026-03-18 05:16:15.293132 | orchestrator | TASK [Remove crushmap backup] ************************************************** 2026-03-18 05:16:15.293143 | orchestrator | Wednesday 18 March 2026 05:16:09 +0000 (0:00:02.061) 0:32:40.984 ******* 2026-03-18 05:16:15.293153 | orchestrator | changed: [testbed-node-0] 2026-03-18 05:16:15.293164 | orchestrator | 2026-03-18 05:16:15.293175 | orchestrator | PLAY [Show ceph status] ******************************************************** 2026-03-18 05:16:15.293186 | orchestrator | 2026-03-18 05:16:15.293197 | orchestrator | TASK [Set_fact container_exec_cmd_status] ************************************** 2026-03-18 05:16:15.293207 | orchestrator | Wednesday 18 March 2026 05:16:10 +0000 (0:00:01.150) 0:32:42.135 ******* 2026-03-18 05:16:15.293218 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:16:15.293229 | orchestrator | ok: [testbed-node-1] 2026-03-18 05:16:15.293240 | orchestrator | ok: [testbed-node-2] 2026-03-18 
05:16:15.293250 | orchestrator | 2026-03-18 05:16:15.293261 | orchestrator | TASK [Show ceph status] ******************************************************** 2026-03-18 05:16:15.293287 | orchestrator | Wednesday 18 March 2026 05:16:11 +0000 (0:00:00.826) 0:32:42.962 ******* 2026-03-18 05:16:15.293298 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:16:15.293329 | orchestrator | 2026-03-18 05:16:15.293341 | orchestrator | TASK [Show all daemons version] ************************************************ 2026-03-18 05:16:15.293352 | orchestrator | Wednesday 18 March 2026 05:16:12 +0000 (0:00:01.325) 0:32:44.287 ******* 2026-03-18 05:16:15.293363 | orchestrator | ok: [testbed-node-0] 2026-03-18 05:16:15.293374 | orchestrator | 2026-03-18 05:16:15.293384 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-18 05:16:15.293396 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-18 05:16:15.293408 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0 2026-03-18 05:16:15.293420 | orchestrator | testbed-node-0 : ok=248  changed=19  unreachable=0 failed=0 skipped=369  rescued=0 ignored=0 2026-03-18 05:16:15.293439 | orchestrator | testbed-node-1 : ok=191  changed=14  unreachable=0 failed=0 skipped=343  rescued=0 ignored=0 2026-03-18 05:16:15.293457 | orchestrator | testbed-node-2 : ok=196  changed=14  unreachable=0 failed=0 skipped=344  rescued=0 ignored=0 2026-03-18 05:16:16.039850 | orchestrator | testbed-node-3 : ok=311  changed=21  unreachable=0 failed=0 skipped=341  rescued=0 ignored=0 2026-03-18 05:16:16.039959 | orchestrator | testbed-node-4 : ok=307  changed=17  unreachable=0 failed=0 skipped=352  rescued=0 ignored=0 2026-03-18 05:16:16.039974 | orchestrator | testbed-node-5 : ok=309  changed=16  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0 2026-03-18 05:16:16.039987 | orchestrator | 2026-03-18 
05:16:16.039998 | orchestrator | 2026-03-18 05:16:16.040009 | orchestrator | 2026-03-18 05:16:16.040020 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-18 05:16:16.040032 | orchestrator | Wednesday 18 March 2026 05:16:15 +0000 (0:00:02.590) 0:32:46.877 ******* 2026-03-18 05:16:16.040043 | orchestrator | =============================================================================== 2026-03-18 05:16:16.040054 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 73.02s 2026-03-18 05:16:16.040064 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 73.00s 2026-03-18 05:16:16.040075 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 47.01s 2026-03-18 05:16:16.040086 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.14s 2026-03-18 05:16:16.040096 | orchestrator | Gather and delegate facts ---------------------------------------------- 30.59s 2026-03-18 05:16:16.040107 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.55s 2026-03-18 05:16:16.040117 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.52s 2026-03-18 05:16:16.040128 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 26.61s 2026-03-18 05:16:16.040139 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.92s 2026-03-18 05:16:16.040149 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.92s 2026-03-18 05:16:16.040160 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 19.96s 2026-03-18 05:16:16.040172 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 15.64s 2026-03-18 05:16:16.040183 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 13.62s 2026-03-18 05:16:16.040211 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.83s 2026-03-18 05:16:16.040223 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 12.08s 2026-03-18 05:16:16.040234 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 11.80s 2026-03-18 05:16:16.040244 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 11.19s 2026-03-18 05:16:16.040255 | orchestrator | Stop standby ceph mds -------------------------------------------------- 10.25s 2026-03-18 05:16:16.040265 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 10.14s 2026-03-18 05:16:16.040276 | orchestrator | Stop ceph osd ----------------------------------------------------------- 9.40s 2026-03-18 05:16:16.366713 | orchestrator | + osism apply cephclient 2026-03-18 05:16:18.544719 | orchestrator | 2026-03-18 05:16:18 | INFO  | Task d4546fd5-2d02-411f-aedc-6fd55454413c (cephclient) was prepared for execution. 2026-03-18 05:16:18.544818 | orchestrator | 2026-03-18 05:16:18 | INFO  | It takes a moment until task d4546fd5-2d02-411f-aedc-6fd55454413c (cephclient) has been started and output is visible here. 
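The "Switch any existing crush buckets to straw2" play earlier in this log (back up the crushmap, convert buckets, remove the backup) boils down to a short `ceph` command sequence. A minimal sketch, assuming a live Ceph cluster with an admin keyring; since the commands need a reachable cluster, the sketch is written to a script file here rather than executed (the backup path is illustrative):

```shell
# Sketch of the straw2 conversion steps from the play above.
# Requires a live Ceph cluster, so it is written to a file instead of run.
cat > /tmp/straw2-sketch.sh <<'EOF'
#!/usr/bin/env bash
set -e
# Back up the current crushmap so the change can be reverted if needed
ceph osd getcrushmap -o /etc/ceph/crushmap-backup
# Convert all legacy straw buckets to straw2 in one step
ceph osd crush set-all-straw-buckets-to-straw2
# Remove the backup once the conversion is confirmed
rm -f /etc/ceph/crushmap-backup
EOF
chmod +x /tmp/straw2-sketch.sh
echo "wrote /tmp/straw2-sketch.sh"
```

Converting to straw2 can trigger a small amount of data movement, which is why the play waits for clean PGs elsewhere in this run.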
2026-03-18 05:16:48.339690 | orchestrator | 2026-03-18 05:16:48.339804 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-18 05:16:48.339861 | orchestrator | 2026-03-18 05:16:48.339873 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-18 05:16:48.339883 | orchestrator | Wednesday 18 March 2026 05:16:25 +0000 (0:00:02.198) 0:00:02.199 ******* 2026-03-18 05:16:48.339893 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-18 05:16:48.339905 | orchestrator | 2026-03-18 05:16:48.339915 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-18 05:16:48.339924 | orchestrator | Wednesday 18 March 2026 05:16:27 +0000 (0:00:01.806) 0:00:04.005 ******* 2026-03-18 05:16:48.339935 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-18 05:16:48.339945 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-18 05:16:48.339955 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-18 05:16:48.339965 | orchestrator | 2026-03-18 05:16:48.339975 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-18 05:16:48.339984 | orchestrator | Wednesday 18 March 2026 05:16:29 +0000 (0:00:02.576) 0:00:06.581 ******* 2026-03-18 05:16:48.339994 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-18 05:16:48.340004 | orchestrator | 2026-03-18 05:16:48.340014 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-18 05:16:48.340023 | orchestrator | Wednesday 18 March 2026 05:16:31 +0000 (0:00:02.155) 0:00:08.737 ******* 2026-03-18 05:16:48.340033 | orchestrator | ok: 
[testbed-manager]
2026-03-18 05:16:48.340043 | orchestrator |
2026-03-18 05:16:48.340052 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-18 05:16:48.340062 | orchestrator | Wednesday 18 March 2026 05:16:33 +0000 (0:00:01.872) 0:00:10.610 *******
2026-03-18 05:16:48.340071 | orchestrator | ok: [testbed-manager]
2026-03-18 05:16:48.340080 | orchestrator |
2026-03-18 05:16:48.340090 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-18 05:16:48.340099 | orchestrator | Wednesday 18 March 2026 05:16:35 +0000 (0:00:01.944) 0:00:12.554 *******
2026-03-18 05:16:48.340109 | orchestrator | ok: [testbed-manager]
2026-03-18 05:16:48.340118 | orchestrator |
2026-03-18 05:16:48.340128 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-18 05:16:48.340137 | orchestrator | Wednesday 18 March 2026 05:16:37 +0000 (0:00:02.056) 0:00:14.611 *******
2026-03-18 05:16:48.340147 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-18 05:16:48.340157 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool)
2026-03-18 05:16:48.340167 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-18 05:16:48.340176 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-18 05:16:48.340186 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-18 05:16:48.340233 | orchestrator |
2026-03-18 05:16:48.340247 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-18 05:16:48.340258 | orchestrator | Wednesday 18 March 2026 05:16:43 +0000 (0:00:01.468) 0:00:20.650 *******
2026-03-18 05:16:48.340269 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-18 05:16:48.340280 | orchestrator |
2026-03-18 05:16:48.340292 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-18 05:16:48.340303 | orchestrator | Wednesday 18 March 2026 05:16:45 +0000 (0:00:01.468) 0:00:22.119 *******
2026-03-18 05:16:48.340313 | orchestrator | skipping: [testbed-manager]
2026-03-18 05:16:48.340324 | orchestrator |
2026-03-18 05:16:48.340336 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-18 05:16:48.340347 | orchestrator | Wednesday 18 March 2026 05:16:46 +0000 (0:00:01.107) 0:00:23.226 *******
2026-03-18 05:16:48.340357 | orchestrator | skipping: [testbed-manager]
2026-03-18 05:16:48.340368 | orchestrator |
2026-03-18 05:16:48.340379 | orchestrator | PLAY RECAP *********************************************************************
2026-03-18 05:16:48.340403 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-18 05:16:48.340415 | orchestrator |
2026-03-18 05:16:48.340426 | orchestrator |
2026-03-18 05:16:48.340436 | orchestrator | TASKS RECAP ********************************************************************
2026-03-18 05:16:48.340447 | orchestrator | Wednesday 18 March 2026 05:16:48 +0000 (0:00:01.569) 0:00:24.796 *******
2026-03-18 05:16:48.340458 | orchestrator | ===============================================================================
2026-03-18 05:16:48.340469 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 6.04s
2026-03-18 05:16:48.340493 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.58s
2026-03-18 05:16:48.340504 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 2.16s
2026-03-18 05:16:48.340515 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 2.06s
2026-03-18 05:16:48.340525 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.94s
2026-03-18 05:16:48.340536 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.87s
2026-03-18 05:16:48.340547 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 1.81s
2026-03-18 05:16:48.340558 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.57s
2026-03-18 05:16:48.340569 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.47s
2026-03-18 05:16:48.340579 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 1.11s
2026-03-18 05:16:48.651555 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-03-18 05:16:48.651635 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh
2026-03-18 05:16:48.658436 | orchestrator | + set -e
2026-03-18 05:16:48.658490 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-18 05:16:48.658502 | orchestrator | ++ export INTERACTIVE=false
2026-03-18 05:16:48.658519 | orchestrator | ++ INTERACTIVE=false
2026-03-18 05:16:48.658528 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-18 05:16:48.658536 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-18 05:16:48.658544 | orchestrator | + source /opt/manager-vars.sh
2026-03-18 05:16:48.658551 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-18 05:16:48.658559 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-18 05:16:48.658567 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-18 05:16:48.658575 | orchestrator | ++ CEPH_VERSION=reef
2026-03-18 05:16:48.658583 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-18 05:16:48.658590 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-18 05:16:48.658598 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-18 05:16:48.658606 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-18 05:16:48.658614 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-18 05:16:48.658622 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-18 05:16:48.658629 | orchestrator | ++ export ARA=false
2026-03-18 05:16:48.658637 | orchestrator | ++ ARA=false
2026-03-18 05:16:48.658645 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-18 05:16:48.658652 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-18 05:16:48.658660 | orchestrator | ++ export TEMPEST=false
2026-03-18 05:16:48.658668 | orchestrator | ++ TEMPEST=false
2026-03-18 05:16:48.658675 | orchestrator | ++ export IS_ZUUL=true
2026-03-18 05:16:48.658683 | orchestrator | ++ IS_ZUUL=true
2026-03-18 05:16:48.658691 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43
2026-03-18 05:16:48.658699 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.43
2026-03-18 05:16:48.658706 | orchestrator | ++ export EXTERNAL_API=false
2026-03-18 05:16:48.658714 | orchestrator | ++ EXTERNAL_API=false
2026-03-18 05:16:48.658721 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-18 05:16:48.658729 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-18 05:16:48.658736 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-18 05:16:48.658744 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-18 05:16:48.658752 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-18 05:16:48.658759 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-18 05:16:48.658767 | orchestrator | ++ export RABBITMQ3TO4=true
2026-03-18 05:16:48.658774 | orchestrator | ++ RABBITMQ3TO4=true
2026-03-18 05:16:48.658782 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-18 05:16:48.659710 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-18 05:16:48.666277 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-03-18 05:16:48.666298 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-03-18 05:16:48.666307 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-18 05:16:48.666315 | orchestrator | + osism migrate rabbitmq3to4 prepare
2026-03-18 05:17:10.750245 | orchestrator | 2026-03-18 05:17:10 | ERROR  | Unable to get ansible vault password
2026-03-18 05:17:10.750363 | orchestrator | 2026-03-18 05:17:10 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-18 05:17:10.750381 | orchestrator | 2026-03-18 05:17:10 | ERROR  | Dropping encrypted entries
2026-03-18 05:17:10.789277 | orchestrator | 2026-03-18 05:17:10 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-03-18 05:17:10.790299 | orchestrator | 2026-03-18 05:17:10 | INFO  | Kolla configuration check passed
2026-03-18 05:17:10.975546 | orchestrator | 2026-03-18 05:17:10 | INFO  | Created vhost 'openstack' with default_queue_type=quorum
2026-03-18 05:17:10.995291 | orchestrator | 2026-03-18 05:17:10 | INFO  | Set permissions for user 'openstack' on vhost 'openstack'
2026-03-18 05:17:11.343104 | orchestrator | + osism migrate rabbitmq3to4 list
2026-03-18 05:17:32.381594 | orchestrator | 2026-03-18 05:17:32 | ERROR  | Unable to get ansible vault password
2026-03-18 05:17:32.381715 | orchestrator | 2026-03-18 05:17:32 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-18 05:17:32.381732 | orchestrator | 2026-03-18 05:17:32 | ERROR  | Dropping encrypted entries
2026-03-18 05:17:32.425022 | orchestrator | 2026-03-18 05:17:32 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-03-18 05:17:32.572085 | orchestrator | 2026-03-18 05:17:32 | INFO  | Found 207 classic queue(s) in vhost '/': 2026-03-18 05:17:32.572196 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-03-18 05:17:32.572343 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-03-18 05:17:32.572366 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-03-18 05:17:32.572379 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-03-18 05:17:32.572408 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - barbican.workers_fanout_527ab840c3ba491bae6aa1b5f360d282 (vhost: /, messages: 0) 2026-03-18 05:17:32.572422 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - barbican.workers_fanout_a7eaedb13eac4570b5a2433d3767b556 (vhost: /, messages: 0) 2026-03-18 05:17:32.572523 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - barbican.workers_fanout_d1341b9d88dd41a39fd897909891ebc1 (vhost: /, messages: 0) 2026-03-18 05:17:32.572536 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-03-18 05:17:32.572547 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - central (vhost: /, messages: 0) 2026-03-18 05:17:32.572571 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-03-18 05:17:32.572583 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-03-18 05:17:32.572594 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-03-18 05:17:32.572606 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - central_fanout_0ad89b788dac40e992a668579934ef73 (vhost: /, messages: 0) 2026-03-18 05:17:32.572617 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - central_fanout_50932205da9e4203b6b39fef78f3ab08 (vhost: /, messages: 0) 2026-03-18 
05:17:32.572894 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - central_fanout_6919dd7cf70349d5a1b770cf0a47334e (vhost: /, messages: 0) 2026-03-18 05:17:32.573029 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - central_fanout_90d9a5b74195440b9e935bee4742f848 (vhost: /, messages: 0) 2026-03-18 05:17:32.573425 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - central_fanout_d4852009cc21485d858c2919cbb56c4d (vhost: /, messages: 0) 2026-03-18 05:17:32.573451 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-03-18 05:17:32.573657 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-03-18 05:17:32.573893 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-03-18 05:17:32.574510 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-03-18 05:17:32.574535 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-backup_fanout_64eb66de1c964edbaeacfda8cbe94144 (vhost: /, messages: 0) 2026-03-18 05:17:32.574546 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-backup_fanout_abe141aa2f334648b26b3e4d60add1da (vhost: /, messages: 0) 2026-03-18 05:17:32.574555 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-backup_fanout_d2ba9f9de88e4363b3f7a73b8a6b7348 (vhost: /, messages: 0) 2026-03-18 05:17:32.574841 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-03-18 05:17:32.575070 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-03-18 05:17:32.575352 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-03-18 05:17:32.575686 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-03-18 05:17:32.575706 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - 
cinder-scheduler_fanout_3164c29448c545088ae7b7a45f10621f (vhost: /, messages: 0) 2026-03-18 05:17:32.576116 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-scheduler_fanout_60d9b81159214ea8912ac596a18d85b7 (vhost: /, messages: 0) 2026-03-18 05:17:32.576324 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-scheduler_fanout_97fb5fa980b94c34b415084348699f16 (vhost: /, messages: 0) 2026-03-18 05:17:32.576435 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-03-18 05:17:32.576461 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-03-18 05:17:32.576702 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-03-18 05:17:32.577000 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_ae0c87f3d6d549b590ff89c7750509f3 (vhost: /, messages: 0) 2026-03-18 05:17:32.577019 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-03-18 05:17:32.577243 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-03-18 05:17:32.577524 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_0a1aaad3ba864d2da39ddd62cf0bae92 (vhost: /, messages: 0) 2026-03-18 05:17:32.577687 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-03-18 05:17:32.577905 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-03-18 05:17:32.578676 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout_ad62270b4a04481ca2c989af3308495f (vhost: /, messages: 0) 2026-03-18 05:17:32.578771 | orchestrator | 
2026-03-18 05:17:32 | INFO  |  - cinder-volume_fanout_5d53a34bc6d64f42b8caa145520fe17e (vhost: /, messages: 0) 2026-03-18 05:17:32.578786 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-volume_fanout_9bd0680a8e8846838f54a3dd287a72f9 (vhost: /, messages: 0) 2026-03-18 05:17:32.578798 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - cinder-volume_fanout_cf27d50db23f47e98619602a49d8c92b (vhost: /, messages: 0) 2026-03-18 05:17:32.579226 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - compute (vhost: /, messages: 0) 2026-03-18 05:17:32.579540 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-03-18 05:17:32.579564 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-03-18 05:17:32.579572 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-03-18 05:17:32.579580 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - compute_fanout_669641a639a34ecdb05b893c9ebd6609 (vhost: /, messages: 0) 2026-03-18 05:17:32.579784 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - compute_fanout_c245cf0075d64ee0af290c01bf936083 (vhost: /, messages: 0) 2026-03-18 05:17:32.579798 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - compute_fanout_f18f19c04f294ce8846eb846aa3e8612 (vhost: /, messages: 0) 2026-03-18 05:17:32.579876 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - conductor (vhost: /, messages: 0) 2026-03-18 05:17:32.579888 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-03-18 05:17:32.579896 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-03-18 05:17:32.580135 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 2026-03-18 05:17:32.580152 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - conductor_fanout_36ec1f13d90f4765bbb30e0f2046d362 (vhost: /, messages: 0) 2026-03-18 
05:17:32.580411 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - conductor_fanout_3a824fc10a27452e92e87f4355bfa073 (vhost: /, messages: 0) 2026-03-18 05:17:32.580563 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - conductor_fanout_441b81e1915d4827bc1dfee8c1e6a104 (vhost: /, messages: 0) 2026-03-18 05:17:32.580579 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - conductor_fanout_aa5a823477dd4d5ba205ff989457a9cd (vhost: /, messages: 0) 2026-03-18 05:17:32.580701 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - conductor_fanout_cfba186c3df14342b4b5f7d26cbdc9c3 (vhost: /, messages: 0) 2026-03-18 05:17:32.580987 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - conductor_fanout_f6f167e572aa4ba3bf56c079d8bcc52f (vhost: /, messages: 0) 2026-03-18 05:17:32.581004 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - event.sample (vhost: /, messages: 4) 2026-03-18 05:17:32.581383 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-03-18 05:17:32.581406 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - magnum-conductor.cj6dx37si3ej (vhost: /, messages: 0) 2026-03-18 05:17:32.581695 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - magnum-conductor.miafvffmkkv6 (vhost: /, messages: 0) 2026-03-18 05:17:32.581716 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - magnum-conductor.tbsbeme724kd (vhost: /, messages: 0) 2026-03-18 05:17:32.581782 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - magnum-conductor_fanout_001a1f70820a471aa89e7b02ae535d6f (vhost: /, messages: 0) 2026-03-18 05:17:32.581941 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - magnum-conductor_fanout_06a3133114684860ad279e3b4e6eaeb6 (vhost: /, messages: 0) 2026-03-18 05:17:32.582197 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - magnum-conductor_fanout_37daf486392548b8a42ea7a9cf624b4b (vhost: /, messages: 0) 2026-03-18 05:17:32.582221 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - magnum-conductor_fanout_47e24b9d151d4f7fba0e7b6da767480e (vhost: /, messages: 
0) 2026-03-18 05:17:32.582572 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - magnum-conductor_fanout_6b3322ed72494359bc368c4642183175 (vhost: /, messages: 0) 2026-03-18 05:17:32.582588 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - magnum-conductor_fanout_9ccfd2726ab841edb4b59db11406098c (vhost: /, messages: 0) 2026-03-18 05:17:32.582878 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - magnum-conductor_fanout_c8438647a1d84c62be192dd2f0d9f1f4 (vhost: /, messages: 0) 2026-03-18 05:17:32.583030 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - magnum-conductor_fanout_e2a29e50691f4635a9259c49cd180060 (vhost: /, messages: 0) 2026-03-18 05:17:32.583058 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - magnum-conductor_fanout_e7915e05f7754d659d69a60395b21b3e (vhost: /, messages: 0) 2026-03-18 05:17:32.583066 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-03-18 05:17:32.583208 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-03-18 05:17:32.583220 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-03-18 05:17:32.583346 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-03-18 05:17:32.583622 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-data_fanout_3e8f8f40cf21447bb0d9443f1ffb7138 (vhost: /, messages: 0) 2026-03-18 05:17:32.583762 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-data_fanout_7b9606ecd44f4ed6ad5bd0026360aa01 (vhost: /, messages: 0) 2026-03-18 05:17:32.583773 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-data_fanout_a8abd5f607cf4818b11fe420bb77f5e0 (vhost: /, messages: 0) 2026-03-18 05:17:32.583882 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-scheduler (vhost: /, messages: 0) 2026-03-18 05:17:32.584034 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0) 
2026-03-18 05:17:32.584257 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-03-18 05:17:32.584325 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-03-18 05:17:32.584462 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-scheduler_fanout_22f4ba85a37c402281b5c057ab5abad8 (vhost: /, messages: 0) 2026-03-18 05:17:32.584621 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-scheduler_fanout_8c9811dd007d48ec9b1f93e9e894f5e1 (vhost: /, messages: 0) 2026-03-18 05:17:32.584794 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-scheduler_fanout_9c686c735a064edc888a62607109bfa3 (vhost: /, messages: 0) 2026-03-18 05:17:32.584955 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-share (vhost: /, messages: 0) 2026-03-18 05:17:32.585093 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0) 2026-03-18 05:17:32.585259 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0) 2026-03-18 05:17:32.585434 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0) 2026-03-18 05:17:32.585559 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-share_fanout_0f28257752f74d7b8886b27e0b7b13b6 (vhost: /, messages: 0) 2026-03-18 05:17:32.585743 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-share_fanout_434912ec66c74a5ba8ffeb983379de09 (vhost: /, messages: 0) 2026-03-18 05:17:32.585754 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - manila-share_fanout_61ef4633df08442d9e0464371d255f72 (vhost: /, messages: 0) 2026-03-18 05:17:32.585934 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - notifications.audit (vhost: /, messages: 0) 2026-03-18 05:17:32.586124 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - notifications.critical (vhost: /, messages: 0) 
2026-03-18 05:17:32.586138 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - notifications.debug (vhost: /, messages: 0) 2026-03-18 05:17:32.586595 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - notifications.error (vhost: /, messages: 0) 2026-03-18 05:17:32.586856 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - notifications.info (vhost: /, messages: 0) 2026-03-18 05:17:32.586995 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - notifications.sample (vhost: /, messages: 0) 2026-03-18 05:17:32.587025 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - notifications.warn (vhost: /, messages: 0) 2026-03-18 05:17:32.587081 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0) 2026-03-18 05:17:32.587111 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0) 2026-03-18 05:17:32.587287 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0) 2026-03-18 05:17:32.587308 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0) 2026-03-18 05:17:32.587493 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - octavia_provisioning_v2_fanout_0e10871f93094c5789bd257a6b094393 (vhost: /, messages: 0) 2026-03-18 05:17:32.587633 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - octavia_provisioning_v2_fanout_8d19fc2decfb4d4a914f4ea678e9ec38 (vhost: /, messages: 0) 2026-03-18 05:17:32.587801 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - octavia_provisioning_v2_fanout_f8ba1bf2935540fa8a0099f511f5c931 (vhost: /, messages: 0) 2026-03-18 05:17:32.587818 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - producer (vhost: /, messages: 0) 2026-03-18 05:17:32.588014 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0) 2026-03-18 05:17:32.588249 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - producer.testbed-node-1 (vhost: /, 
messages: 0) 2026-03-18 05:17:32.588272 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0) 2026-03-18 05:17:32.588597 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - producer_fanout_0d7f194d6c6f4be5a1ee334f31d49a37 (vhost: /, messages: 0) 2026-03-18 05:17:32.588729 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - producer_fanout_11022f582ee943b59afec9062f08951f (vhost: /, messages: 0) 2026-03-18 05:17:32.588885 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - producer_fanout_1a91059a8e0c4ed3bd2ea13c5d7a1904 (vhost: /, messages: 0) 2026-03-18 05:17:32.589076 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - producer_fanout_a7765287ee9f490188e67d21fe37a997 (vhost: /, messages: 0) 2026-03-18 05:17:32.589292 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - producer_fanout_f48ee6457dcf4c98ab3e80cfda8aae4b (vhost: /, messages: 0) 2026-03-18 05:17:32.589334 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - producer_fanout_fe50ee03d0874fb89180141a78d798bb (vhost: /, messages: 0) 2026-03-18 05:17:32.589584 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-plugin (vhost: /, messages: 0) 2026-03-18 05:17:32.589604 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-03-18 05:17:32.589730 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-03-18 05:17:32.589887 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-03-18 05:17:32.590009 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-plugin_fanout_49401d80b438482e95812fdfc540ab71 (vhost: /, messages: 0) 2026-03-18 05:17:32.590243 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-plugin_fanout_4fe2b742121b4c448b810176302b8ad9 (vhost: /, messages: 0) 2026-03-18 05:17:32.590380 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-plugin_fanout_51d5074ec4c445f4a30de7a05f28c49f (vhost: /, messages: 0) 2026-03-18 05:17:32.590520 
| orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-plugin_fanout_86cb7de708ac43929b4acbfd9812dc74 (vhost: /, messages: 0) 2026-03-18 05:17:32.590546 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-plugin_fanout_871e0239d72e4903b1b7ade491bd1e5a (vhost: /, messages: 0) 2026-03-18 05:17:32.590566 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-plugin_fanout_953fca86214f47c3b26367648632d7b6 (vhost: /, messages: 0) 2026-03-18 05:17:32.590891 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-plugin_fanout_9fdbdf89ae2d4711ad0f60146e5754d0 (vhost: /, messages: 0) 2026-03-18 05:17:32.590919 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-plugin_fanout_e641ecc2bcd34fe1b2bc757b77cea9a3 (vhost: /, messages: 0) 2026-03-18 05:17:32.591015 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-plugin_fanout_f62adbb8fc134626a7e9f745ce43b927 (vhost: /, messages: 0) 2026-03-18 05:17:32.591198 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin (vhost: /, messages: 0) 2026-03-18 05:17:32.591382 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-03-18 05:17:32.591549 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-03-18 05:17:32.591694 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-03-18 05:17:32.591852 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_066c2427cc204097afc2ffd1e388149b (vhost: /, messages: 0) 2026-03-18 05:17:32.591968 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_0e1f13da278b44f9bb282bab15790a19 (vhost: /, messages: 0) 2026-03-18 05:17:32.592100 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_16aabc2201054852a5e70f0aedd02e01 (vhost: /, messages: 0) 2026-03-18 05:17:32.592407 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_17dae1948d0c4d9dbb58ebc5f9fdcf54 
(vhost: /, messages: 0) 2026-03-18 05:17:32.592432 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_1c6a05381cb34b9d9b7529206e567d3e (vhost: /, messages: 0) 2026-03-18 05:17:32.592621 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_1e4098881c224a1985076ecded4cfeab (vhost: /, messages: 0) 2026-03-18 05:17:32.592649 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_2b927c19a9d242f7bf3d9cd719f3cf9c (vhost: /, messages: 0) 2026-03-18 05:17:32.592816 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_4515d450e33c45c6aa24ecf2f7c36922 (vhost: /, messages: 0) 2026-03-18 05:17:32.592899 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_6640ac1c475246caba2098cfac6f9c75 (vhost: /, messages: 0) 2026-03-18 05:17:32.593100 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_8eab75d1692a4862bd38d8a928f97464 (vhost: /, messages: 0) 2026-03-18 05:17:32.593204 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_96fd5c47b93a46ad8f9399ce24cd858c (vhost: /, messages: 0) 2026-03-18 05:17:32.593337 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_b0a09c6c584c4fad8621c7d3a675481f (vhost: /, messages: 0) 2026-03-18 05:17:32.593487 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_be05e232c2ea4f7c810c42f88411ac9a (vhost: /, messages: 0) 2026-03-18 05:17:32.593636 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_bf279be945614d068bdf24c4038000b0 (vhost: /, messages: 0) 2026-03-18 05:17:32.593765 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_c0fe57124a0f46b5967cda293ed9b6ff (vhost: /, messages: 0) 2026-03-18 05:17:32.593879 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_d5dcef361cdf46ac8ed3ed63bb8650eb (vhost: /, messages: 0) 2026-03-18 05:17:32.594012 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - 
q-reports-plugin_fanout_f1538a6db8944202aa65c8d1178b89bc (vhost: /, messages: 0) 2026-03-18 05:17:32.594168 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-reports-plugin_fanout_fb7eb84f663c43569302a8161da4382e (vhost: /, messages: 0) 2026-03-18 05:17:32.594301 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0) 2026-03-18 05:17:32.594517 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0) 2026-03-18 05:17:32.594531 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0) 2026-03-18 05:17:32.594639 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0) 2026-03-18 05:17:32.594775 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-server-resource-versions_fanout_58d8d99e399845cb8020744e3d41b9aa (vhost: /, messages: 0) 2026-03-18 05:17:32.594964 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-server-resource-versions_fanout_65c648e872164f85abb0101e2d3e9edf (vhost: /, messages: 0) 2026-03-18 05:17:32.595180 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-server-resource-versions_fanout_7a5e637e1e7a4df9a79489ffb0d9e6bb (vhost: /, messages: 0) 2026-03-18 05:17:32.595198 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-server-resource-versions_fanout_a43f2df52234428096a125302328ef93 (vhost: /, messages: 0) 2026-03-18 05:17:32.595540 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-server-resource-versions_fanout_acaadab06f7d4722a782822524af2592 (vhost: /, messages: 0) 2026-03-18 05:17:32.595767 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-server-resource-versions_fanout_de80af6f036f4328802c38b2b96fb62e (vhost: /, messages: 0) 2026-03-18 05:17:32.595940 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-server-resource-versions_fanout_e4d5c58d1fa0431da0b01480d91a396f (vhost: /, messages: 0) 2026-03-18 05:17:32.596493 
| orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-server-resource-versions_fanout_eba5fc5bdaf74ae88673565c509302d6 (vhost: /, messages: 0)
2026-03-18 05:17:32.596513 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - q-server-resource-versions_fanout_ee7fcee087944b3986db49e1cb22ad26 (vhost: /, messages: 0)
2026-03-18 05:17:32.596535 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_0367d5aa1ff54cd290f3bd0663a09fd4 (vhost: /, messages: 0)
2026-03-18 05:17:32.596880 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_051892466a794769ac8d760985769a75 (vhost: /, messages: 0)
2026-03-18 05:17:32.596893 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_1f4d91a02f7f4b0eb81259e11fdace88 (vhost: /, messages: 0)
2026-03-18 05:17:32.597067 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_3181de339cb04c8fbbf66528d2185e1c (vhost: /, messages: 0)
2026-03-18 05:17:32.597203 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_564f6205b91c4a3ba97e528954cff2ea (vhost: /, messages: 0)
2026-03-18 05:17:32.597357 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_6346063f048f4b9c95f1201d570015e0 (vhost: /, messages: 0)
2026-03-18 05:17:32.597454 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_676c9de0dece4dcabaef388fd674edf8 (vhost: /, messages: 0)
2026-03-18 05:17:32.597666 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_67fa2a3f05b94b8f9a268d23199fa102 (vhost: /, messages: 0)
2026-03-18 05:17:32.597783 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_6d6fa5db44a245188520c8cd15697db7 (vhost: /, messages: 0)
2026-03-18 05:17:32.597972 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_8832580892e342c48d1833ef4b7aa411 (vhost: /, messages: 0)
2026-03-18 05:17:32.598126 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_9fbc823168b84f09be7fe3a2398e7fbf (vhost: /, messages: 0)
2026-03-18 05:17:32.598235 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_aa0524e21989459f8453be0d0acb79b3 (vhost: /, messages: 0)
2026-03-18 05:17:32.598405 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_b3cd1258164c4db0ab576322f581262d (vhost: /, messages: 0)
2026-03-18 05:17:32.598571 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_be5a254aefb74449946ee8daa1ea1dcf (vhost: /, messages: 0)
2026-03-18 05:17:32.598726 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_e5cfa553751e45bab25380e72464951b (vhost: /, messages: 0)
2026-03-18 05:17:32.598771 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_e7246282e0384dcc98449806cbf87907 (vhost: /, messages: 0)
2026-03-18 05:17:32.598925 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_f21cb3038eed4a3bbce1f72f331c63f4 (vhost: /, messages: 0)
2026-03-18 05:17:32.599024 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_fd5b9c26396f45368a3edd2a87147c51 (vhost: /, messages: 0)
2026-03-18 05:17:32.599260 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - reply_fea2708358ee4d59b93611504aee1aaa (vhost: /, messages: 0)
2026-03-18 05:17:32.599305 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - scheduler (vhost: /, messages: 0)
2026-03-18 05:17:32.599590 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-03-18 05:17:32.599637 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-03-18 05:17:32.599813 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-03-18 05:17:32.599967 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - scheduler_fanout_0044e0a079004f35a389c2e020741e3e (vhost: /, messages: 0)
2026-03-18 05:17:32.600124 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - scheduler_fanout_2a594be6bf984f4b821335fe1d815125 (vhost: /, messages: 0)
2026-03-18 05:17:32.600203 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - scheduler_fanout_371e4c9186a64297bf21ef901c9fb668 (vhost: /, messages: 0)
2026-03-18 05:17:32.600379 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - scheduler_fanout_39387b5276fa44758ac2c14c1960398c (vhost: /, messages: 0)
2026-03-18 05:17:32.600473 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - scheduler_fanout_9d1ea0c46a0a41c08771de9f24e34ca9 (vhost: /, messages: 0)
2026-03-18 05:17:32.600688 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - scheduler_fanout_aa12b356d5074d29a74fcc566bb8b2ec (vhost: /, messages: 0)
2026-03-18 05:17:32.600757 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - worker (vhost: /, messages: 0)
2026-03-18 05:17:32.600960 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-03-18 05:17:32.601084 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-03-18 05:17:32.601195 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-03-18 05:17:32.601364 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - worker_fanout_1119f75d10d0477a9d0121fd8149bf19 (vhost: /, messages: 0)
2026-03-18 05:17:32.601684 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - worker_fanout_15e684d4f65243199ec4259499374c91 (vhost: /, messages: 0)
2026-03-18 05:17:32.601702 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - worker_fanout_47c516c5c48d4f8ca0f20f772e40efef (vhost: /, messages: 0)
2026-03-18 05:17:32.602113 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - worker_fanout_502f3f3c89c34124bef4693a5a944ecf (vhost: /, messages: 0)
2026-03-18 05:17:32.602156 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - worker_fanout_dc59051430cb47b69277c6db36df4f21 (vhost: /, messages: 0)
2026-03-18 05:17:32.602171 | orchestrator | 2026-03-18 05:17:32 | INFO  |  - worker_fanout_e75ed2e4044245ccb53d4e1bc6576553 (vhost: /, messages: 0)
2026-03-18 05:17:32.924532 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-03-18 05:17:35.050676 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run]
2026-03-18 05:17:35.050773 | orchestrator |                                   [--no-close-connections] [--quorum]
2026-03-18 05:17:35.050791 | orchestrator |                                   [--vhost VHOST]
2026-03-18 05:17:35.050803 | orchestrator |                                   [{list,delete,prepare,check}]
2026-03-18 05:17:35.050815 | orchestrator |                                   [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}]
2026-03-18 05:17:35.050828 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check)
2026-03-18 05:17:35.781352 | orchestrator | ERROR
2026-03-18 05:17:35.781557 | orchestrator | {
2026-03-18 05:17:35.781593 | orchestrator |   "delta": "1:20:45.065150",
2026-03-18 05:17:35.781616 | orchestrator |   "end": "2026-03-18 05:17:35.391719",
2026-03-18 05:17:35.781636 | orchestrator |   "msg": "non-zero return code",
2026-03-18 05:17:35.781655 | orchestrator |   "rc": 2,
2026-03-18 05:17:35.781673 | orchestrator |   "start": "2026-03-18 03:56:50.326569"
2026-03-18 05:17:35.781690 | orchestrator | } failure
2026-03-18 05:17:36.022222 |
2026-03-18 05:17:36.022336 | PLAY RECAP
2026-03-18 05:17:36.022391 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-03-18 05:17:36.022415 |
2026-03-18 05:17:36.253409 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-03-18 05:17:36.255758 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-18 05:17:36.966732 |
2026-03-18 05:17:36.967007 | PLAY [Post output play]
2026-03-18 05:17:36.983974 |
2026-03-18 05:17:36.984109 | LOOP [stage-output : Register sources]
2026-03-18 05:17:37.056447 |
2026-03-18 05:17:37.056828 | TASK [stage-output : Check sudo]
2026-03-18 05:17:37.944131 | orchestrator | sudo: a password is required
2026-03-18 05:17:38.095773 | orchestrator | ok: Runtime: 0:00:00.014301
2026-03-18 05:17:38.109928 |
2026-03-18 05:17:38.110101 | LOOP [stage-output : Set source and destination for files and folders]
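The failing task above is the shell step calling `osism migrate rabbitmq3to4 list-exchanges`: per the CLI's own usage output, the valid subcommands are only `list`, `delete`, `prepare`, and `check`, so the parser rejects `list-exchanges` and exits with status 2, which Ansible then reports as `rc: 2`. A minimal sketch reproducing that validation behavior (the parser layout here is inferred from the usage text in the log, not taken from the real osism source, so option semantics are assumptions):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Approximate the 'osism migrate rabbitmq3to4' parser from its usage text."""
    parser = argparse.ArgumentParser(prog="osism migrate rabbitmq3to4")
    parser.add_argument("--server")
    parser.add_argument("--dry-run", action="store_true")
    parser.add_argument("--no-close-connections", action="store_true")
    parser.add_argument("--quorum", action="store_true")
    parser.add_argument("--vhost")
    # Positional subcommand with a fixed choice list, as shown in the usage line.
    parser.add_argument("command", nargs="?",
                        choices=["list", "delete", "prepare", "check"])
    parser.add_argument("service", nargs="?",
                        choices=["aodh", "barbican", "ceilometer", "cinder",
                                 "designate", "notifications", "manager",
                                 "magnum", "manila", "neutron", "nova",
                                 "octavia"])
    return parser


def exit_code(argv: list[str]) -> int:
    """Return the process exit code argparse would produce for argv."""
    try:
        build_parser().parse_args(argv)
        return 0
    except SystemExit as exc:  # argparse calls sys.exit(2) on a parse error
        return exc.code or 0
```

With this sketch, `exit_code(["list-exchanges"])` yields 2 (argparse's standard exit status for an argument error, matching the `rc: 2` in the JSON above), while a valid invocation such as `exit_code(["list", "neutron"])` yields 0. Note also the `delta` of 1:20:45: the script ran for over an hour before hitting this final invalid invocation.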
2026-03-18 05:17:38.148078 |
2026-03-18 05:17:38.148385 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-18 05:17:38.217640 | orchestrator | ok
2026-03-18 05:17:38.227156 |
2026-03-18 05:17:38.227342 | LOOP [stage-output : Ensure target folders exist]
2026-03-18 05:17:38.681050 | orchestrator | ok: "docs"
2026-03-18 05:17:38.681440 |
2026-03-18 05:17:38.931349 | orchestrator | ok: "artifacts"
2026-03-18 05:17:39.179481 | orchestrator | ok: "logs"
2026-03-18 05:17:39.201802 |
2026-03-18 05:17:39.202000 | LOOP [stage-output : Copy files and folders to staging folder]
2026-03-18 05:17:39.242227 |
2026-03-18 05:17:39.242512 | TASK [stage-output : Make all log files readable]
2026-03-18 05:17:39.536283 | orchestrator | ok
2026-03-18 05:17:39.543113 |
2026-03-18 05:17:39.543290 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-03-18 05:17:39.577399 | orchestrator | skipping: Conditional result was False
2026-03-18 05:17:39.587551 |
2026-03-18 05:17:39.587694 | TASK [stage-output : Discover log files for compression]
2026-03-18 05:17:39.611744 | orchestrator | skipping: Conditional result was False
2026-03-18 05:17:39.622390 |
2026-03-18 05:17:39.622535 | LOOP [stage-output : Archive everything from logs]
2026-03-18 05:17:39.663124 |
2026-03-18 05:17:39.663336 | PLAY [Post cleanup play]
2026-03-18 05:17:39.672981 |
2026-03-18 05:17:39.673100 | TASK [Set cloud fact (Zuul deployment)]
2026-03-18 05:17:39.731064 | orchestrator | ok
2026-03-18 05:17:39.743128 |
2026-03-18 05:17:39.743282 | TASK [Set cloud fact (local deployment)]
2026-03-18 05:17:39.777371 | orchestrator | skipping: Conditional result was False
2026-03-18 05:17:39.793586 |
2026-03-18 05:17:39.793744 | TASK [Clean the cloud environment]
2026-03-18 05:17:40.408471 | orchestrator | 2026-03-18 05:17:40 - clean up servers
2026-03-18 05:17:41.178542 | orchestrator | 2026-03-18 05:17:41 - testbed-manager
2026-03-18 05:17:41.261599 | orchestrator | 2026-03-18 05:17:41 - testbed-node-3
2026-03-18 05:17:41.348522 | orchestrator | 2026-03-18 05:17:41 - testbed-node-2
2026-03-18 05:17:41.438597 | orchestrator | 2026-03-18 05:17:41 - testbed-node-0
2026-03-18 05:17:41.529204 | orchestrator | 2026-03-18 05:17:41 - testbed-node-4
2026-03-18 05:17:41.623368 | orchestrator | 2026-03-18 05:17:41 - testbed-node-1
2026-03-18 05:17:41.707825 | orchestrator | 2026-03-18 05:17:41 - testbed-node-5
2026-03-18 05:17:41.800265 | orchestrator | 2026-03-18 05:17:41 - clean up keypairs
2026-03-18 05:17:41.819605 | orchestrator | 2026-03-18 05:17:41 - testbed
2026-03-18 05:17:41.843738 | orchestrator | 2026-03-18 05:17:41 - wait for servers to be gone
2026-03-18 05:17:52.775147 | orchestrator | 2026-03-18 05:17:52 - clean up ports
2026-03-18 05:17:52.965741 | orchestrator | 2026-03-18 05:17:52 - 3d02d622-92a6-4c39-9acd-0c68d3451ccf
2026-03-18 05:17:53.259944 | orchestrator | 2026-03-18 05:17:53 - 3eac6123-9610-499c-83ec-c00dda37c847
2026-03-18 05:17:53.556507 | orchestrator | 2026-03-18 05:17:53 - 493256cb-443f-4165-ae2d-196cc45531d7
2026-03-18 05:17:53.989917 | orchestrator | 2026-03-18 05:17:53 - a48d3192-bbd7-4d29-a6ae-161c1a80e06e
2026-03-18 05:17:54.201247 | orchestrator | 2026-03-18 05:17:54 - cc9a090a-535b-4903-a003-5cbe563a06d4
2026-03-18 05:17:54.416463 | orchestrator | 2026-03-18 05:17:54 - dc52a224-7163-4e2c-bb9f-c42f6d47e26b
2026-03-18 05:17:54.636944 | orchestrator | 2026-03-18 05:17:54 - fbb0d3f9-d9cf-4daa-b8a3-f4cc95479d6b
2026-03-18 05:17:54.865234 | orchestrator | 2026-03-18 05:17:54 - clean up volumes
2026-03-18 05:17:55.002419 | orchestrator | 2026-03-18 05:17:55 - testbed-volume-0-node-base
2026-03-18 05:17:55.042923 | orchestrator | 2026-03-18 05:17:55 - testbed-volume-2-node-base
2026-03-18 05:17:55.089281 | orchestrator | 2026-03-18 05:17:55 - testbed-volume-1-node-base
2026-03-18 05:17:55.134353 | orchestrator | 2026-03-18 05:17:55 - testbed-volume-4-node-base
2026-03-18 05:17:55.177497 | orchestrator | 2026-03-18 05:17:55 - testbed-volume-3-node-base
2026-03-18 05:17:55.223210 | orchestrator | 2026-03-18 05:17:55 - testbed-volume-5-node-base
2026-03-18 05:17:55.272325 | orchestrator | 2026-03-18 05:17:55 - testbed-volume-7-node-4
2026-03-18 05:17:55.315225 | orchestrator | 2026-03-18 05:17:55 - testbed-volume-manager-base
2026-03-18 05:17:55.357907 | orchestrator | 2026-03-18 05:17:55 - testbed-volume-3-node-3
2026-03-18 05:17:55.402650 | orchestrator | 2026-03-18 05:17:55 - testbed-volume-0-node-3
2026-03-18 05:17:55.444295 | orchestrator | 2026-03-18 05:17:55 - testbed-volume-6-node-3
2026-03-18 05:17:55.485313 | orchestrator | 2026-03-18 05:17:55 - testbed-volume-1-node-4
2026-03-18 05:17:55.528824 | orchestrator | 2026-03-18 05:17:55 - testbed-volume-8-node-5
2026-03-18 05:17:55.568920 | orchestrator | 2026-03-18 05:17:55 - testbed-volume-4-node-4
2026-03-18 05:17:55.611899 | orchestrator | 2026-03-18 05:17:55 - testbed-volume-2-node-5
2026-03-18 05:17:55.651877 | orchestrator | 2026-03-18 05:17:55 - testbed-volume-5-node-5
2026-03-18 05:17:55.693194 | orchestrator | 2026-03-18 05:17:55 - disconnect routers
2026-03-18 05:17:55.809237 | orchestrator | 2026-03-18 05:17:55 - testbed
2026-03-18 05:17:56.808433 | orchestrator | 2026-03-18 05:17:56 - clean up subnets
2026-03-18 05:17:56.847300 | orchestrator | 2026-03-18 05:17:56 - subnet-testbed-management
2026-03-18 05:17:57.017771 | orchestrator | 2026-03-18 05:17:57 - clean up networks
2026-03-18 05:17:57.211022 | orchestrator | 2026-03-18 05:17:57 - net-testbed-management
2026-03-18 05:17:57.490447 | orchestrator | 2026-03-18 05:17:57 - clean up security groups
2026-03-18 05:17:57.531151 | orchestrator | 2026-03-18 05:17:57 - testbed-node
2026-03-18 05:17:57.687497 | orchestrator | 2026-03-18 05:17:57 - testbed-management
2026-03-18 05:17:57.799433 | orchestrator | 2026-03-18 05:17:57 - clean up floating ips
2026-03-18 05:17:57.845370 | orchestrator | 2026-03-18 05:17:57 - 81.163.192.43
2026-03-18 05:17:58.231259 | orchestrator | 2026-03-18 05:17:58 - clean up routers
2026-03-18 05:17:58.339228 | orchestrator | 2026-03-18 05:17:58 - testbed
2026-03-18 05:17:59.856794 | orchestrator | ok: Runtime: 0:00:19.656207
2026-03-18 05:17:59.861334 |
2026-03-18 05:17:59.861529 | PLAY RECAP
2026-03-18 05:17:59.861657 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-03-18 05:17:59.861720 |
2026-03-18 05:17:59.997130 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-18 05:17:59.999399 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-18 05:18:00.739862 |
2026-03-18 05:18:00.740029 | PLAY [Cleanup play]
2026-03-18 05:18:00.756169 |
2026-03-18 05:18:00.756356 | TASK [Set cloud fact (Zuul deployment)]
2026-03-18 05:18:00.814858 | orchestrator | ok
2026-03-18 05:18:00.824863 |
2026-03-18 05:18:00.825005 | TASK [Set cloud fact (local deployment)]
2026-03-18 05:18:00.859449 | orchestrator | skipping: Conditional result was False
2026-03-18 05:18:00.873902 |
2026-03-18 05:18:00.874038 | TASK [Clean the cloud environment]
2026-03-18 05:18:02.028042 | orchestrator | 2026-03-18 05:18:02 - clean up servers
2026-03-18 05:18:02.512798 | orchestrator | 2026-03-18 05:18:02 - clean up keypairs
2026-03-18 05:18:02.530998 | orchestrator | 2026-03-18 05:18:02 - wait for servers to be gone
2026-03-18 05:18:02.574670 | orchestrator | 2026-03-18 05:18:02 - clean up ports
2026-03-18 05:18:02.649487 | orchestrator | 2026-03-18 05:18:02 - clean up volumes
2026-03-18 05:18:02.710231 | orchestrator | 2026-03-18 05:18:02 - disconnect routers
2026-03-18 05:18:02.741778 | orchestrator | 2026-03-18 05:18:02 - clean up subnets
2026-03-18 05:18:02.766617 | orchestrator | 2026-03-18 05:18:02 - clean up networks
2026-03-18 05:18:02.937935 | orchestrator | 2026-03-18 05:18:02 - clean up security groups
2026-03-18 05:18:02.974151 | orchestrator | 2026-03-18 05:18:02 - clean up floating ips
2026-03-18 05:18:02.997827 | orchestrator | 2026-03-18 05:18:02 - clean up routers
2026-03-18 05:18:03.913393 | orchestrator | ok: Runtime: 0:00:01.863097
2026-03-18 05:18:03.916820 |
2026-03-18 05:18:03.916960 | PLAY RECAP
2026-03-18 05:18:03.917062 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-03-18 05:18:03.917113 |
2026-03-18 05:18:04.040067 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-18 05:18:04.042533 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-18 05:18:04.815782 |
2026-03-18 05:18:04.815953 | PLAY [Base post-fetch]
2026-03-18 05:18:04.832220 |
2026-03-18 05:18:04.832353 | TASK [fetch-output : Set log path for multiple nodes]
2026-03-18 05:18:04.887356 | orchestrator | skipping: Conditional result was False
2026-03-18 05:18:04.897445 |
2026-03-18 05:18:04.897614 | TASK [fetch-output : Set log path for single node]
2026-03-18 05:18:04.957757 | orchestrator | ok
2026-03-18 05:18:04.967915 |
2026-03-18 05:18:04.968059 | LOOP [fetch-output : Ensure local output dirs]
2026-03-18 05:18:05.442438 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/46f873c7bae843c8b90b2a18e80e4656/work/logs"
2026-03-18 05:18:05.725133 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/46f873c7bae843c8b90b2a18e80e4656/work/artifacts"
2026-03-18 05:18:06.008276 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/46f873c7bae843c8b90b2a18e80e4656/work/docs"
2026-03-18 05:18:06.031811 |
2026-03-18 05:18:06.032047 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-18 05:18:06.977857 | orchestrator | changed: .d..t...... ./
2026-03-18 05:18:06.978101 | orchestrator | changed: All items complete
2026-03-18 05:18:06.978139 |
2026-03-18 05:18:07.731179 | orchestrator | changed: .d..t...... ./
2026-03-18 05:18:08.470425 | orchestrator | changed: .d..t...... ./
2026-03-18 05:18:08.494357 |
2026-03-18 05:18:08.494518 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-18 05:18:08.532667 | orchestrator | skipping: Conditional result was False
2026-03-18 05:18:08.536493 | orchestrator | skipping: Conditional result was False
2026-03-18 05:18:08.555466 |
2026-03-18 05:18:08.555605 | PLAY RECAP
2026-03-18 05:18:08.555687 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-18 05:18:08.555729 |
2026-03-18 05:18:08.700121 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-18 05:18:08.701138 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-18 05:18:09.451604 |
2026-03-18 05:18:09.451776 | PLAY [Base post]
2026-03-18 05:18:09.467088 |
2026-03-18 05:18:09.467298 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-18 05:18:10.461284 | orchestrator | changed
2026-03-18 05:18:10.473968 |
2026-03-18 05:18:10.474159 | PLAY RECAP
2026-03-18 05:18:10.474288 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-18 05:18:10.474398 |
2026-03-18 05:18:10.591161 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-18 05:18:10.592235 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-18 05:18:11.373381 |
2026-03-18 05:18:11.373562 | PLAY [Base post-logs]
2026-03-18 05:18:11.384464 |
2026-03-18 05:18:11.384603 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-18 05:18:11.859955 | localhost | changed
2026-03-18 05:18:11.875268 |
2026-03-18 05:18:11.875455 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-18 05:18:11.904092 | localhost | ok
2026-03-18 05:18:11.911346 |
2026-03-18 05:18:11.911519 | TASK [Set zuul-log-path fact]
2026-03-18 05:18:11.929360 | localhost | ok
2026-03-18 05:18:11.943328 |
2026-03-18 05:18:11.943480 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-18 05:18:11.971277 | localhost | ok
2026-03-18 05:18:11.977769 |
2026-03-18 05:18:11.977942 | TASK [upload-logs : Create log directories]
2026-03-18 05:18:12.487262 | localhost | changed
2026-03-18 05:18:12.491327 |
2026-03-18 05:18:12.491463 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-18 05:18:13.024635 | localhost -> localhost | ok: Runtime: 0:00:00.007241
2026-03-18 05:18:13.033943 |
2026-03-18 05:18:13.034170 | TASK [upload-logs : Upload logs to log server]
2026-03-18 05:18:13.632032 | localhost | Output suppressed because no_log was given
2026-03-18 05:18:13.634605 |
2026-03-18 05:18:13.634736 | LOOP [upload-logs : Compress console log and json output]
2026-03-18 05:18:13.688123 | localhost | skipping: Conditional result was False
2026-03-18 05:18:13.694258 | localhost | skipping: Conditional result was False
2026-03-18 05:18:13.709264 |
2026-03-18 05:18:13.709436 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-18 05:18:13.757603 | localhost | skipping: Conditional result was False
2026-03-18 05:18:13.758157 |
2026-03-18 05:18:13.762107 | localhost | skipping: Conditional result was False
2026-03-18 05:18:13.771322 |
2026-03-18 05:18:13.771562 | LOOP [upload-logs : Upload console log and json output]